AVA-Specific Commands
Special commands unique to AVA Database for data loading, saving, compression, and system configuration.
.load - Load Data from Files
Load data from CSV, AVA binary (.adb), or other file formats into tables, or execute SQL script files (.sql).
Load CSV Files
-- Load CSV into new table
.load '/data/bt/20251208W/POLARIS1/data/prices.csv' as prices
-- Load with automatic table name
.load '/data/prices.csv'
-- Load into existing table
.load '/data/prices.csv' into prices_temp
Load Binary Files (.adb)
-- Load AVA binary format
.load '/data/prices.adb'
-- Load with table name
.load '/data/archive/prices.adb' as prices
-- Load compressed binary
.load '/data/compressed/prices.adb.lzma'
Load from Python API
import avapy
# Load CSV
avapy.asql(".load '/data/prices.csv' as prices")
# Or use LoadTable function
avapy.LoadTable("/data/prices.csv", "prices")
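If you generate `.load` commands programmatically, a small helper avoids quoting mistakes. `build_load` below is a hypothetical convenience function, not part of avapy; only the `.load` syntax it emits (plain, `as`, and `into` forms) comes from this page.

```python
def build_load(path, table=None, into=False):
    """Build a .load command string.

    table=None        -> .load '<path>'              (automatic table name)
    table, into=False -> .load '<path>' as <table>   (new table)
    table, into=True  -> .load '<path>' into <table> (existing table)
    """
    cmd = f".load '{path}'"
    if table is not None:
        cmd += f" {'into' if into else 'as'} {table}"
    return cmd

# Pass the generated string to avapy.asql(build_load(...)) as usual.
```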
Load SQL Scripts
Execute SQL script files (.sql) containing multiple commands. Scripts support comments, multiple statements, and all AVA commands.
-- Load and execute SQL script
.load '/path/to/script.sql'
-- Script example: data_pipeline.sql
-- ==========================================
-- This is a SQL script that will be executed sequentially
-- Load data
.load '/data/bt/20251208W/POLARIS1/data/SEC_INFO_UNIV.adb' as securities;
-- Configure output
/v off
/pl 10
-- Run queries
SELECT COUNT(*) FROM securities;
SELECT DISTINCT CIQID FROM securities LIMIT 5;
-- Process data
UPDATE
sector_count: COUNT(*) OVER (PARTITION BY SECTOR)
FROM securities
GROUP BY SECTOR;
-- Save results
.save securities AS '/data/processed/securities_processed.adb';
SQL Script Features
- SQL comments using --
- Multiple statements separated by semicolons
- Sequential execution (like batch mode)
- Supports all AVA commands (.load, .save, .compress, etc.)
- Automatic error detection with detailed messages
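The splitting rules above (-- comments, semicolon-terminated statements, one dot- or slash-command per line, sequential execution) can be sketched in Python. This is an illustration of the semantics, not AVA's actual parser, and it deliberately ignores corner cases such as semicolons inside string literals:

```python
def split_script(text):
    """Split a SQL script into statements, honoring -- comments.

    SQL statements end at a semicolon; dot-commands (.load, .save, ...)
    and slash-commands (/v, /pl, ...) occupy a single line each.
    """
    statements, buf = [], []
    for line in text.splitlines():
        code = line.split("--", 1)[0].rstrip()   # strip -- comment
        stripped = code.strip()
        if not stripped:
            continue
        if stripped.startswith((".", "/")):      # one command per line
            statements.append(stripped.rstrip(";"))
            continue
        buf.append(stripped)
        if stripped.endswith(";"):               # statement complete
            statements.append(" ".join(buf).rstrip(";"))
            buf = []
    if buf:                                      # trailing statement without ;
        statements.append(" ".join(buf))
    return statements
```

Each returned statement would then be executed in order, mirroring batch-mode behavior.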
Script Example: Complete Workflow
-- File: daily_processing.sql
-- Daily data processing pipeline
-- Step 1: Configuration
/v off
/pl 1
-- Step 2: Load today's data
.load '/data/daily/prices_2025-01-15.csv' as daily_prices;
.load '/data/master/securities.adb' as securities;
-- Step 3: Data validation
SELECT
COUNT(*) as total_records,
COUNT(DISTINCT TICKER) as unique_tickers
FROM daily_prices;
-- Step 4: Calculate returns
UPDATE
prev_price: LAG(PRICE, 1) OVER (PARTITION BY TICKER ORDER BY DATE),
daily_return: (PRICE / LAG(PRICE, 1) OVER (PARTITION BY TICKER ORDER BY DATE)) - 1
FROM daily_prices
GROUP BY TICKER;
-- Step 5: Join with securities master
CREATE TABLE enriched_prices AS
SELECT
p.*,
s.SECTOR,
s.INDUSTRY
FROM daily_prices p
JOIN securities s ON p.TICKER = s.TICKER;
-- Step 6: Sort and compress
.sort enriched_prices ORDER BY TICKER ASC, DATE ASC;
.compress enriched_prices;
-- Step 7: Save results
.save enriched_prices AS '/data/processed/prices_2025-01-15.adb';
Running Scripts from Command Line
-- Interactive mode
$ ./bin/ava
ava> .load '/path/to/script.sql'
-- Batch mode (avab)
$ ./bin/avab -f /path/to/script.sql
.save - Save Tables to Files
Export table data to CSV or binary format.
Save as Binary (.adb)
-- Save to binary format (recommended)
.save prices AS '/data/export/prices.adb'
-- Save with binary parameter
.save prices AS '/data/export/prices.adb' parameter(format=binary)
-- Save compressed
.save large_table AS '/data/export/large_table.adb' parameter(format=binary)
Save as CSV
-- Save to CSV
.save prices AS '/data/export/prices.csv'
-- Query results to CSV
.save (SELECT * FROM prices WHERE DATE >= '2025-01-01')
AS '/data/export/recent_prices.csv'
Save from Python API
import avapy
# Save as binary
avapy.asql(".save prices AS '/data/export/prices.adb' parameter(format=binary)")
# Or use SaveTable function
avapy.SaveTable("prices", "/data/export/prices.adb")
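As with `.load`, a tiny builder keeps generated `.save` commands consistent. `build_save` is a hypothetical helper mirroring the syntax shown above, including the optional parameter(format=binary) clause:

```python
def build_save(table, path, binary=False):
    """Build a .save command; binary=True appends parameter(format=binary)."""
    cmd = f".save {table} AS '{path}'"
    if binary:
        cmd += " parameter(format=binary)"
    return cmd
```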
.compress - Compress Tables
Compress tables in memory using the LZMA algorithm.
-- Compress entire table
.compress prices
-- Compress with specific algorithm
.compress prices BY LZMA
-- Compress after reaching size threshold
SELECT COUNT(*) as row_count FROM prices;
-- If row_count > 1000, compress
.compress prices
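The threshold check above is manual; a small wrapper can make the decision a one-liner. `maybe_compress` is hypothetical glue code (only the `.compress` syntax comes from this page) and assumes you fetch the row count yourself, e.g. via avapy.asql:

```python
def maybe_compress(table, row_count, threshold=1000):
    """Return a .compress command when the table exceeds the threshold, else None."""
    if row_count > threshold:
        return f".compress {table}"
    return None
```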
Compression Benefits
Compressing large tables reduces their in-memory footprint at the cost of extra CPU time when the data is compressed and decompressed. Sorting a table first (see .sort below) groups similar values together and generally improves the compression ratio.
.decompress - Decompress Tables
Decompress compressed tables in memory.
-- Decompress table
.decompress prices
-- Decompress specific file
.decompress prices.adb
-- Typical workflow
.load '/data/compressed/prices.adb' -- Load compressed
.decompress prices -- Decompress for processing
-- ... perform operations ...
.compress prices -- Recompress
.save prices AS '/data/prices.adb' -- Save
.sort - Sort Tables
Sort tables by specified columns (modifies the table in place).
-- Sort by single column
.sort prices ORDER BY DATE ASC
-- Sort by multiple columns
.sort prices ORDER BY TICKER ASC, DATE DESC
-- Sort for better compression
.sort prices ORDER BY TICKER ASC, DATE ASC
.compress prices
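The "sort for better compression" advice can be demonstrated with Python's stdlib lzma module (AVA's .compress also uses LZMA): rows grouped by ticker produce long repeated runs, while the same rows shuffled do not. The data below is synthetic:

```python
import lzma
import random

# Synthetic price rows: 20 tickers x 28 days, constant price.
rows = [f"T{t:03d},2025-01-{d:02d},100.0"
        for t in range(20) for d in range(1, 29)]

shuffled = rows[:]
random.Random(0).shuffle(shuffled)

sorted_size = len(lzma.compress("\n".join(sorted(rows)).encode()))
shuffled_size = len(lzma.compress("\n".join(shuffled).encode()))

print(sorted_size, shuffled_size)  # sorted data typically compresses smaller
```

The same rows compress to fewer bytes once sorted, which is why the workflow above sorts by TICKER and DATE before calling .compress.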
.odbc - ODBC Database Connections
Connect to external databases via ODBC.
-- Connect to database
.odbc connect 'DSN=MyDatabase;UID=user;PWD=password'
-- Execute query and load results
.odbc execute "SELECT * FROM external_table" as local_table
-- Disconnect
.odbc disconnect
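Connection strings like the one above are semicolon-delimited key=value pairs; building them from keyword arguments avoids typos. `make_conn_str` is a hypothetical helper (keep real credentials out of source control regardless):

```python
def make_conn_str(**params):
    """Build an ODBC connection string from keyword pairs (DSN, UID, PWD, ...)."""
    return ";".join(f"{k}={v}" for k, v in params.items())

# e.g. .odbc connect '<result of make_conn_str(...)>'
```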
Configuration Commands
System configuration and output control commands.
Verbose Mode
-- Turn off verbose output (recommended for scripts)
/v off
-- Turn on verbose output
/v on
Parallelism Level
-- Set parallelism to 1 thread (sequential execution)
/pl 1
-- Set parallelism to 8 threads
/pl 8
-- Use all available cores
/pl 0
Precision Control
-- Set numeric precision to 5 decimal places
/p 5
-- Set precision to 2 decimal places
/p 2
Full Width Display
-- Enable full width display
/fw on
-- Disable full width display
/fw off
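Scripts often set several of these switches at once; a dict of settings can be expanded into the slash-commands shown above. `config_commands` is a hypothetical helper; feed each resulting string to avapy.asql:

```python
def config_commands(settings):
    """Expand {'v': 'off', 'pl': 8} into ['/v off', '/pl 8']."""
    return [f"/{key} {value}" for key, value in settings.items()]
```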
Practical Workflow Examples
Example 1: Data Processing Pipeline
-- Configure environment
/v off
/pl 1
-- Load data
.load '/data/bt/20251208W/POLARIS1/data/prices.csv' as prices
-- Process data
UPDATE
prev_price: LAG(PRICE, 1),
daily_return: PRICE / LAG(PRICE, 1) - 1
FROM prices
GROUP BY TICKER;
-- Sort for better compression
.sort prices ORDER BY TICKER ASC, DATE ASC
-- Compress and save
.compress prices
.save prices AS '/data/processed/prices.adb' parameter(format=binary)
Example 2: Load, Process, Export
-- Load compressed historical data
.load '/data/archive/prices_2024.adb'
.decompress prices_2024
-- Combine with recent data
.load '/data/prices_2025.csv' as prices_2025
INSERT INTO prices_2024 SELECT * FROM prices_2025;
-- Export final result
.save prices_2024 AS '/data/export/all_prices.csv'
Example 3: Using SQL Scripts
-- Create a reusable SQL script: analysis_workflow.sql
-- ======================================================
-- Configuration
/v off
/pl 10
-- Load multiple data sources
.load '/data/securities.adb' as securities;
.load '/data/prices_2025.csv' as prices;
-- Data quality checks
SELECT 'Loaded securities:' as step, COUNT(*) as count FROM securities;
SELECT 'Loaded prices:' as step, COUNT(*) as count FROM prices;
-- Join and analyze
CREATE TABLE analysis AS
SELECT
p.TICKER,
p.DATE,
p.PRICE,
s.SECTOR,
AVG(p.PRICE) OVER (PARTITION BY p.TICKER ORDER BY p.DATE ROWS BETWEEN 20 PRECEDING AND CURRENT ROW) as ma_20,
STDDEV(p.PRICE) OVER (PARTITION BY p.TICKER ORDER BY p.DATE ROWS BETWEEN 20 PRECEDING AND CURRENT ROW) as volatility_20
FROM prices p
JOIN securities s ON p.TICKER = s.TICKER;
-- Export results
.sort analysis ORDER BY TICKER ASC, DATE ASC;
.compress analysis;
.save analysis AS '/data/output/analysis_results.adb';
SELECT 'Analysis complete!' as status;
Run the script:
-- In AVA interactive mode
ava> .load '/scripts/analysis_workflow.sql'
-- Or in batch mode
$ ./bin/avab -f /scripts/analysis_workflow.sql
Example 4: Python Integration
import avapy
# Configure
avapy.asql("/v off")
avapy.asql("/pl 1")
# Load data
avapy.asql(".load '/data/prices.csv' as prices")
# Query
result = avapy.asql("""
SELECT
TICKER,
AVG(PRICE) as avg_price,
MAX(PRICE) as max_price
FROM prices
GROUP BY TICKER
""")
# Save results
avapy.asql(".save result AS '/data/summary.csv'")