
Syntax Examples

Import a csv file

This method does not appear to support ~/filename or $HOME/filename, but does support relative and absolute paths. The sqlite3 help text says that -csv will "set output mode to 'csv'", but it also affects .import statements.

sqlite3 -csv filename.db ".import path/to/some_file.csv destination_table_name"
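The same import can be done with dot-commands on stdin, which is handy inside scripts; a sketch using placeholder names (sample.csv, demo_import.db, destination_table_name). Since the table does not exist yet, .import creates it and treats the CSV's first row as column names.

```shell
# Create a tiny placeholder CSV: a header row plus two data rows
printf 'name,value\nalpha,1\nbeta,2\n' > sample.csv
# .mode csv affects .import the same way the -csv flag does
sqlite3 demo_import.db <<'EOF'
.mode csv
.import sample.csv destination_table_name
EOF
# The two data rows should now be in the table
sqlite3 demo_import.db "SELECT count(*) FROM destination_table_name;"
```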

Create a table

CREATE TABLE servers (
  id INTEGER PRIMARY KEY,
  hostname VARCHAR(255),
  ip_addr VARCHAR(32),
UNIQUE (id, hostname));

Or from a unix shell

sqlite3 foo.db "CREATE TABLE servers (
  id INTEGER PRIMARY KEY,
  hostname VARCHAR(255),
  ip_addr VARCHAR(32),
UNIQUE (id, hostname));"

Add a column to the table

ALTER TABLE servers ADD os varchar(255);
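To confirm the column was added, .schema prints the table's current CREATE TABLE statement; a sketch against a throwaway database (alter_demo.db is a placeholder name):

```shell
# Create the table, then add a column to it
sqlite3 alter_demo.db "CREATE TABLE servers (hostname VARCHAR(255), ip_addr VARCHAR(32));"
sqlite3 alter_demo.db "ALTER TABLE servers ADD os varchar(255);"
# .schema shows the stored definition, including the new os column
sqlite3 alter_demo.db ".schema servers"
```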

Add rows to the table from unix shell

sqlite3 foo.db "insert into baz values ('50','some text');"

Add rows or update if the row already exists

This syntax differs from other SQL implementations.

insert or replace into tablename(filename, hash) values
  ('/etc/hosts', 'somehash'),
  ('/etc/resolv.conf', 'someotherhash');
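SQLite 3.24+ also supports the PostgreSQL-style ON CONFLICT upsert, which updates the existing row in place instead of deleting and re-inserting it the way INSERT OR REPLACE does. A sketch with a placeholder files table (upsert_demo.db is an assumed name):

```shell
sqlite3 upsert_demo.db "CREATE TABLE files (filename TEXT PRIMARY KEY, hash TEXT);"
sqlite3 upsert_demo.db "INSERT INTO files VALUES ('/etc/hosts', 'somehash');"
# A second insert for the same filename updates the row instead of failing
sqlite3 upsert_demo.db "INSERT INTO files VALUES ('/etc/hosts', 'newhash')
  ON CONFLICT(filename) DO UPDATE SET hash=excluded.hash;"
sqlite3 upsert_demo.db "SELECT hash FROM files WHERE filename='/etc/hosts';"
```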

Output rows via the unix shell

This outputs as columns, but csv, html, line, and list modes exist too.

sqlite3 -column foo.db "SELECT * FROM baz LIMIT 5;"

If -column truncates your output, use -list -separator ' ' instead.

-line is the equivalent of mysql's \G or postgres's \x
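A quick way to compare the modes side by side; a sketch with a throwaway table (modes_demo.db and baz are placeholder names):

```shell
sqlite3 modes_demo.db "CREATE TABLE baz (id INT, name TEXT);
  INSERT INTO baz VALUES (1,'alpha'),(2,'beta');"
# -list with a custom separator avoids -column's fixed-width truncation
sqlite3 -list -separator ' ' modes_demo.db "SELECT * FROM baz;"
# -line prints one column per line, like mysql's \G
sqlite3 -line modes_demo.db "SELECT * FROM baz LIMIT 1;"
```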

Select a random row from a table

sqlite3 -line CostReport-1.db "SELECT * FROM $TABLE
WHERE _ROWID_ >= (abs(random()) % (SELECT max(_ROWID_) FROM $TABLE))
LIMIT 1 ;"
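The _ROWID_ trick above is fast on large tables but skews toward rows that follow gaps in the rowid sequence. For small tables, a simpler (though full-scan) alternative is ORDER BY random(); a sketch with a placeholder table (rand_demo.db is an assumed name):

```shell
sqlite3 rand_demo.db "CREATE TABLE t (x INT); INSERT INTO t VALUES (1),(2),(3);"
# ORDER BY random() scans the whole table, but is unbiased and gap-safe
sqlite3 rand_demo.db "SELECT x FROM t ORDER BY random() LIMIT 1;"
```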

Sum one column ordered by another column

This does a quick cost analysis on an AWS Cost-Report-1.csv file.

## First sanitize the column names
sed -i '1 s#[/: ]#_#g' CostReport-1.csv  # linux sed, macos use gsed
## Next import the csv into a sqlite db
sqlite3 -csv CostReport-1.db ".import CostReport-1.csv costreport"
## Then start a sqlite shell
sqlite3 CostReport-1.db
-- Output Usage Type, ARN or ID, and summed cost as columns
SELECT lineItem_UsageType, lineItem_ResourceId, sum(lineItem_BlendedCost) cost
FROM costreport
GROUP BY lineItem_ResourceId
ORDER BY cost ;

Dump a db from CLI

sqlite3 foo.db ".dump" > foo.sql
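The dump is plain SQL, so restoring is just feeding it back in; a sketch with placeholder names (dump_demo.db, restored.db):

```shell
# Build a small database, dump it to SQL, then rebuild it from the dump
sqlite3 dump_demo.db "CREATE TABLE t (x INT); INSERT INTO t VALUES (42);"
sqlite3 dump_demo.db ".dump" > dump_demo.sql
sqlite3 restored.db < dump_demo.sql
sqlite3 restored.db "SELECT x FROM t;"
```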

Search Skype chat history

sqlite3 "$HOME/Library/Application Support/Skype/daniel.austin/main.db" \
  "SELECT author, timestamp, body_xml FROM messages WHERE body_xml LIKE '%music%' ;"
function skypesearch(){
  local skypeusername="$1"
  local searchstring="$2"
  /usr/bin/env sqlite3 "$HOME/Library/Application Support/Skype/${skypeusername}/main.db" \
  "SELECT author, datetime(timestamp,'unixepoch','localtime'), body_xml
  FROM messages
  WHERE body_xml
  LIKE '%${searchstring}%' ;"
}
alias ss="skypesearch john.doe"

Quickly create an image database

## Create the database
sqlite3 images.db "create table images (filename varchar(255), createdate timestamp, unique(filename))"
## Populate the database. This can be blindly re-run when new files are added.
exiftool -d "%s" -p 'insert into images values ("$filename", "$DateTimeOriginal");' -q -f -r . | sqlite3 images.db 2> /dev/null
## Query the database
sqlite3 images.db "SELECT filename,datetime(createdate,'unixepoch','localtime') as date FROM images WHERE date LIKE '2014-08-02%';"