The Import Data Tool allows data in a variety of formats to be imported into database tables or written to script files. The following steps describe how to use the Import Data Tool.

Note: Import will not work for Generic JDBC or Generic ODBC connections.

If you want to import files that contain a Binary Large Object (BLOB) data type, first enable the option Convert binary to hex:
select File > Options > Results and check the box next to Convert binary to hex, then follow the steps below:

1. Select Tools > Import Data from the menu bar. This prompts you to choose a server into which to import data. Navigate to your server, select it, and click OK; the Import dialog opens. You can also right-click within the schema browser and select Tools > Import Data to launch the Import Tool.

2. The first tab in the Import Data dialog is the General tab. Browse to and select the file to import. Once the file is selected, a sample of it is displayed in the bottom grid to show how it will be split into columns on import. Next, select the encoding and the platform the file is formatted for; the sample columns refresh as these options change. Select whether the file is delimited or has fixed-width columns. For fixed-width columns, type the column widths separated by commas (e.g., 15,25,35,60). The sample data does not update while you type the widths, so click the Fixed Width radio button to refresh it. Select whether the first row of the file contains the column names, which helps the Import Tool map them to the table. Last, select the quote identifier for data values (see the example below). Make sure the sample data displayed is formatted correctly before proceeding, then click Next.
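
For illustration, a comma-delimited file with a header row and double quotes as the quote identifier might look like this (the columns and values are hypothetical):

id,name,city,signup_date
1,"Smith, John",New York,2001-07-04
2,"Doe, Jane",Boston,2001-07-05

The quote identifier lets values such as "Smith, John" contain the delimiter character. The same data in a fixed-width layout would pad each value to its declared column width (e.g., 5,20,15,30) instead of separating values with a delimiter.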

3. In the Format tab, select the database, schema, and table into which to import the data. To import into a new table, click the "..." button, which opens a Create Table dialog (see the Create Tables page for more) with the columns defined from the sample file. If needed, change the names and data types of the columns in the table and then click OK. At this point, it is possible to import into the newly created table. If the sample file contains column names, Aqua Data Studio attempts to match the column names in the import file to the column names in the table. You can reorder the column mapping by changing a Position value to match the column number in the sample file, or remove the Position value to exclude that column from the import (see the example below). Once all columns are mapped, click Create in the upper-left corner of the Create Table dialog. This returns you to the Format tab with your table details adjusted accordingly. Continue by clicking the Next button.
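
For example, if the table's columns are name, id, and signup_date but the file's columns appear in the order id, name, signup_date, set the Position value of the name row to 2 and of the id row to 1 so each file column maps to the correct table column; clearing a Position value excludes that file column from the import. (The column names here are hypothetical.)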

4. In the Options tab, begin by selecting whether "(null)" text values should be converted to NULL values. Then select the format of the dates and times. These formats describe how date and time values appear in the text file so that they can be imported into date/time columns; if a date string is imported into a VARCHAR column, the format does not apply. A sample of the file's values is provided below the options to help you configure the date/time formats. You can either import directly into the database or generate an SQL file of INSERT statements to run against the database. If the Import Tool is being used to generate a script file for use in Aqua Data Studio's Query Analyzer, it is strongly suggested that the statement separator "GO" be selected (an illustrative script fragment follows). When ready, click the Next button to import the data.
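
A generated script consists of one INSERT statement per row, separated by the chosen statement separator. For illustration only (the table and column names are hypothetical):

INSERT INTO customers (id, name, signup_date) VALUES (1, 'John Smith', '2001-07-04')
GO
INSERT INTO customers (id, name, signup_date) VALUES (2, 'Jane Doe', '2001-07-05')
GO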

5. In the Transaction tab, select the Transaction Type, Batch Size, Threshold, and/or Wait Time that best suit your environment. By default, Transaction Type is set to Full so that the import takes place in a single transaction. The Batch type lets you specify the number of records per transaction by entering a number in the Batch Size field. The Threshold type commits at the time interval, in milliseconds, specified in the Threshold field. Wait Time specifies a pause between transactions; if set to -1, the import does not wait between transactions.

  • Full Mode executes commits after each row of data. It provides maximum import reliability at the cost of slower speed.
  • Batch Mode executes commits after the specified number of records. If any record in a batch fails to commit, the entire batch is rolled back, as in the example below the list. This is less reliable but faster, because fewer commit operations are performed.
  • Threshold Mode executes commits at the specified time intervals. This throttles import speed and reduces and levels the load on production databases.
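
For example, with Batch Mode and a Batch Size of 1,000, a 10,000-row import commits ten times; if a row in the fifth batch fails, rows 4,001 through 5,000 are rolled back while the four batches already committed remain in the table.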

If you are importing files that contain BLOB data types, make sure to select Batch mode in the Transaction tab.

6. Once the Status tab takes focus, the import has begun. You can cancel the import at any time by clicking the Cancel button at the bottom of the dialog. Any errors or warnings that occur are displayed in the message text window.

Date and time formats are specified by date and time pattern strings. Within date and time pattern strings, unquoted letters from 'A' to 'Z' and from 'a' to 'z' are interpreted as pattern letters representing the components of a date or time string. Text can be quoted using single quotes (') to avoid interpretation; "''" represents a single quote. All other characters are not interpreted; they are simply copied into the output string during formatting or matched against the input string during parsing. For more information on configuring Date, Time, and Date/Time formats (including user-customized formatting) throughout Aqua Data Studio, see the Options page for Date, Time, and Date/Time formats.

Letter | Date or Time Component | Presentation       | Examples
------ | ---------------------- | ------------------ | --------
G      | Era designator         | Text               | AD
y      | Year                   | Year               | 1996; 96
M      | Month in year          | Month              | July; Jul; 07
w      | Week in year           | Number             | 27
W      | Week in month          | Number             | 2
D      | Day in year            | Number             | 189
d      | Day in month           | Number             | 10
F      | Day of week in month   | Number             | 2
E      | Day in week            | Text               | Tuesday; Tue
a      | Am/pm marker           | Text               | PM
H      | Hour in day (0-23)     | Number             | 0
k      | Hour in day (1-24)     | Number             | 24
K      | Hour in am/pm (0-11)   | Number             | 0
h      | Hour in am/pm (1-12)   | Number             | 12
m      | Minute in hour         | Number             | 30
s      | Second in minute       | Number             | 55
S      | Millisecond            | Number             | 978
z      | Time zone              | General time zone  | Pacific Standard Time; PST; GMT-08:00
Z      | Time zone              | RFC 822 time zone  | -0800

The following examples show how date and time patterns are interpreted in the U.S. locale. The given date and time are 2001-07-04 12:08:56 local time in the U.S. Pacific time zone.

Date and Time Pattern           | Result
------------------------------- | ------
"yyyy.MM.dd G 'at' HH:mm:ss z"  | 2001.07.04 AD at 12:08:56 PDT
"EEE, MMM d, ''yy"              | Wed, Jul 4, '01
"h:mm a"                        | 12:08 PM
"hh 'o''clock' a, zzzz"         | 12 o'clock PM, Pacific Daylight Time
"K:mm a, z"                     | 0:08 PM, PDT
"yyyyy.MMMMM.dd GGG hh:mm aaa"  | 02001.July.04 AD 12:08 PM
"EEE, d MMM yyyy HH:mm:ss Z"    | Wed, 4 Jul 2001 12:08:56 -0700
"yyMMddHHmmssZ"                 | 010704120856-0700

Import of a JSON file for MongoDB

Right-click in the Schema Tree or use the toolbar menu and select Tools > Import Data.

  • In the file selection field, select the file you want to import.
  • In the General tab, select JSON in the Data Format section. You do not need to specify which JSON variant the file uses: content enclosed in curly braces is treated as a single document, while content enclosed in square brackets is treated as an array containing multiple documents. Click Next.
  • Continue through the Format and Options tabs, making your selections in each.
  • In the Status tab, you can monitor the import's progress and completion.


Aqua Data Studio supports importing four types of JSON files:

  • Line Delimited JSON

{"some":"thing"\n"} 

{"may":{"include":"nested","objects":["and","arrays"]}}

  • Concatenated JSON

{"some":"thing\n"}{"may":{"include":"nested","objects":["and","arrays"]}}

  • Document array JSON

[{"some":"thing\n"}, {"may":{"include":"nested","objects":["and","arrays"]}}]

  • Comma between document JSON

{"some":"thing\n"}, {"may":{"include":"nested","objects":["and","arrays"]}}


Date, Time, and Number Formatting options are not available for this feature.


