⚡ Teradata BTEQ Migration

Migrate Teradata BTEQ to Snowflake, BigQuery & Databricks

MigryX parses every Teradata artifact — BTEQ scripts, FastLoad, MultiLoad, FastExport, TPump, Teradata SQL, macros, and stored procedures — and converts them to modern cloud-native targets with QUALIFY rewriting, BTEQ command translation, and column-level lineage.

BTEQ Scripts · FastLoad / MultiLoad · FastExport / TPump · QUALIFY Rewriting · Teradata SQL · Macros · PRIMARY INDEX
Request a Pilot · See What We Parse
⚡ Teradata → Snowflake Conversion
  • BTEQ script parsed — commands + SQL extracted
  • QUALIFY → ROW_NUMBER() OVER () rewriting
  • FastLoad/MultiLoad → COPY INTO / MERGE
  • Teradata-specific functions → cloud equivalents
  • PRIMARY INDEX advisory for clustering keys
  • BTEQ .IF/.GOTO → Python / Airflow control flow
  • Macros → parameterized SQL or stored procs
  • Column-level lineage report generated
  • 5 Teradata utility types
  • 100% BTEQ command coverage
  • 7 modern target platforms
  • 85% faster migration
Source Coverage

Every Teradata artifact — BTEQ, utilities, SQL, and macros

MigryX uses a purpose-built Teradata grammar that understands BTEQ command syntax, session semantics, and Teradata-specific SQL extensions alongside standard DML.

⚡ Teradata Source Artifacts
  • BTEQ scripts — .IF … THEN conditions, .GOTO/.LABEL, .QUIT, .LOGON/.LOGOFF, .IMPORT/.EXPORT
  • FastLoad — CREATE TABLE, BEGIN/END LOADING, SESSIONS, ERRLIMIT
  • MultiLoad — IMPORT, DELETE, INSERT, UPSERT, UPDATE phases
  • FastExport — SELECT, EXPORT, FILE, SESSIONS, FORMAT
  • TPump — INSERT, UPDATE, DELETE, UPSERT, SERIALIZE, RATE
  • Teradata SQL — QUALIFY, EXPAND ON, NORMALIZE, SAMPLE, WITH … BY
  • Teradata-specific functions — ZEROIFNULL, NULLIFZERO, OREPLACE, OTRANSLATE, HASHROW, HASHAMP
  • Analytic functions — QUALIFY + ROW_NUMBER / RANK / DENSE_RANK
  • Index definitions — PRIMARY INDEX (UPI, NUPI) and secondary indexes (USI, NUSI)
  • PARTITION BY RANGE and PARTITION BY CASE
  • Teradata Macros — parameterized SQL, multi-statement, access control
  • Stored Procedures (SPL) — procedural Teradata SQL
  • DBC system views — DBC.Tables, DBC.Columns, DBC.Databases, DBC.Indices
  • JOIN INDEX, AGGREGATE JOIN INDEX, HASH INDEX definitions
🌟 Modern Targets Generated
  • Snowflake SQL — COPY INTO, MERGE INTO, Snowflake Scripting, clustering keys
  • BigQuery SQL — Standard SQL, EXPORT DATA, partitioned tables, scripting
  • Databricks SQL — Delta Lake, COPY INTO, MERGE INTO, Unity Catalog
  • Azure Synapse — dedicated SQL pool, PolyBase / COPY, distribution keys
  • dbt models — incremental, snapshot, source definitions, generic tests
  • Python + Airflow — BTEQ control flow → DAGs, FastLoad/MultiLoad → connectors
  • PySpark — distributed DataFrames for large-scale Teradata data volumes
  • Column-level data lineage for all converted SQL and scripts
  • PRIMARY INDEX advisory — recommended clustering / distribution keys
  • Row-level validation suite generated for every converted script
Teradata-Specific Capabilities

The hard Teradata patterns — handled automatically

Teradata's proprietary extensions — QUALIFY, BTEQ control flow, utility commands, and Teradata-specific functions — require a purpose-built parser, not a generic SQL transpiler.

QUALIFY Rewriting

QUALIFY with ROW_NUMBER, RANK, DENSE_RANK, and other analytic functions — rewritten to equivalent ANSI SQL subqueries or CTEs that run on Snowflake, BigQuery, Databricks, and dbt models.
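
A minimal before/after sketch of the rewrite (the orders table and its columns are hypothetical):

    -- Teradata source: keep the latest order per customer
    SELECT customer_id, order_id, order_ts
    FROM orders
    QUALIFY ROW_NUMBER() OVER (PARTITION BY customer_id ORDER BY order_ts DESC) = 1;

    -- ANSI rewrite emitted for targets without native QUALIFY support
    SELECT customer_id, order_id, order_ts
    FROM (
      SELECT customer_id, order_id, order_ts,
             ROW_NUMBER() OVER (PARTITION BY customer_id ORDER BY order_ts DESC) AS rn
      FROM orders
    ) ranked
    WHERE rn = 1;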

📤 FastLoad / MultiLoad → COPY INTO

FastLoad and MultiLoad utility scripts — including error table handling, SESSIONS, ERRLIMIT, and phase transitions — converted to COPY INTO (Snowflake/Databricks) or equivalent bulk-load patterns on each target.
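
As an illustration, a pipe-delimited FastLoad file load expressed as Snowflake COPY INTO (the stage, table, and file-format settings are hypothetical):

    -- Bulk load from an internal stage, replacing a FastLoad session
    COPY INTO sales_staging
    FROM @migration_stage/sales/
    FILE_FORMAT = (TYPE = CSV FIELD_DELIMITER = '|' SKIP_HEADER = 1)
    ON_ERROR = SKIP_FILE_10;  -- rough analogue of FastLoad ERRLIMIT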

🔄 BTEQ Command Translation

.IF … THEN conditional logic with .GOTO/.LABEL branching, .IMPORT/.EXPORT file operations, .LOGON/.LOGOFF session management, and .QUIT return codes — converted to Python control flow or Airflow branch logic.

🔧 PRIMARY INDEX Advisory

PRIMARY INDEX definitions (UPI, NUPI) and SET/MULTISET semantics analyzed. Migration report recommends equivalent clustering keys, distribution styles, or partition columns on the target platform.
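
For example, a NUPI column carried into a Snowflake clustering key (the sales table is hypothetical):

    -- Teradata source DDL
    CREATE MULTISET TABLE sales (
      sale_id   INTEGER,
      store_id  INTEGER,
      amount    DECIMAL(12,2),
      sale_date DATE
    ) PRIMARY INDEX (store_id);

    -- Advisory output applied on Snowflake
    CREATE TABLE sales (
      sale_id   INTEGER,
      store_id  INTEGER,
      amount    NUMBER(12,2),
      sale_date DATE
    ) CLUSTER BY (store_id);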

📋 Teradata Function Mapping

ZEROIFNULL, NULLIFZERO, OREPLACE, OTRANSLATE, NORMALIZE, EXPAND ON, HASHROW, HASHAMP, FORMAT/TITLE, and 200+ other Teradata-specific functions mapped to cloud-native equivalents.
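
Three of the most common mappings, shown on a hypothetical order_lines table:

    -- Teradata
    SELECT ZEROIFNULL(qty) * price,
           amount / NULLIFZERO(qty),
           OREPLACE(sku, '-', '')
    FROM order_lines;

    -- ANSI equivalent emitted for BigQuery and Databricks
    SELECT COALESCE(qty, 0) * price,
           amount / NULLIF(qty, 0),
           REPLACE(sku, '-', '')
    FROM order_lines;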

📄 Macro & Stored Procedure Migration

Teradata parameterized macros converted to stored procedures, dbt macros, or Jinja2-templated SQL. SPL stored procedures translated to Python functions or cloud SQL procedures.
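
A sketch of one such translation: a hypothetical single-statement Teradata macro converted to a Snowflake Scripting procedure:

    -- Teradata macro
    CREATE MACRO monthly_sales (mon INTEGER) AS (
      SELECT store_id, SUM(amount) AS total
      FROM sales
      WHERE EXTRACT(MONTH FROM sale_date) = :mon
      GROUP BY store_id;
    );

    -- Snowflake Scripting equivalent
    CREATE OR REPLACE PROCEDURE monthly_sales (mon INTEGER)
    RETURNS TABLE (store_id INTEGER, total NUMBER)
    LANGUAGE SQL
    AS
    $$
    DECLARE
      res RESULTSET;
    BEGIN
      res := (SELECT store_id, SUM(amount) AS total
              FROM sales
              WHERE EXTRACT(MONTH FROM sale_date) = :mon
              GROUP BY store_id);
      RETURN TABLE(res);
    END;
    $$;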

Migration Methodology

Three phases — inventory to production

1. Inventory & Complexity Scoring

MigryX scans your BTEQ scripts, utility files, and SQL objects — building a complete inventory with complexity scores and a migration effort estimate before any conversion begins.

  • Full script catalog — names, paths, line counts, utility types
  • QUALIFY / BTEQ command / FastLoad complexity flags
  • Teradata-specific function usage report
  • PRIMARY INDEX and partitioning strategy analysis
  • Prioritization: quick wins vs complex scripts
2. Automated Conversion

Each script is parsed to an intermediate representation and converted using Teradata-specific transformation rules. QUALIFY rewrites, utility command translations, and function mappings are applied automatically.

  • BTEQ grammar parses commands + embedded SQL together
  • QUALIFY → ANSI subquery / CTE rewriting
  • FastLoad/MultiLoad commands → COPY INTO patterns
  • 200+ Teradata function mappings applied inline
  • BTEQ .IF/.GOTO → Python / Airflow control flow
3. Validation & Handover

Every converted script ships with a row-level validation test. MigryX compares converted output against Teradata originals and confirms parity before project handover.

  • Row count and hash validation on all outputs
  • Schema comparison — columns, types, nullability
  • Column-level lineage: Teradata source → target
  • Unmapped items clearly annotated for review
  • Full delivery report with conversion coverage metrics
SQL Mapping

Teradata SQL → modern equivalents

Key Teradata SQL constructs MigryX rewrites automatically. Every conversion is documented in the lineage report.

Teradata Construct | Snowflake | BigQuery | Databricks SQL
QUALIFY ROW_NUMBER() = 1 | QUALIFY (native) / subquery with ROW_NUMBER() | Subquery with ROW_NUMBER() | Subquery with ROW_NUMBER()
ZEROIFNULL(expr) | COALESCE(expr, 0) / ZEROIFNULL() | COALESCE(expr, 0) / IFNULL(expr, 0) | COALESCE(expr, 0)
NULLIFZERO(expr) | NULLIF(expr, 0) | NULLIF(expr, 0) | NULLIF(expr, 0)
OREPLACE(str, s, r) | REPLACE(str, s, r) | REPLACE(str, s, r) | REPLACE(str, s, r)
OTRANSLATE(str, s, r) | TRANSLATE(str, s, r) | TRANSLATE(str, s, r) | TRANSLATE(str, s, r)
FastLoad bulk load | COPY INTO from stage | LOAD DATA / COPY | COPY INTO (Delta)
PRIMARY INDEX (col) | CLUSTER BY (col) | CLUSTER BY (col) | CLUSTER BY (col) (liquid clustering)
DATE FORMAT 'YYYY/MM/DD' | TO_DATE(str, 'YYYY/MM/DD') | PARSE_DATE('%Y/%m/%d', str) | TO_DATE(str, 'yyyy/MM/dd')
SAMPLE n | SAMPLE (n ROWS) | TABLESAMPLE SYSTEM (pct PERCENT) | ORDER BY RAND() LIMIT n
NORMALIZE ON MEETS OR OVERLAPS | Date-range merging CTE pattern | Date-range merging CTE pattern | Date-range merging CTE pattern
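
The NORMALIZE rewrite deserves a closer look, since it has no one-line equivalent. A sketch of the gaps-and-islands CTE pattern, assuming a hypothetical coverage table with half-open [start_dt, end_dt) periods:

    WITH flagged AS (
      -- A row starts a new island unless it meets or overlaps the running max end date
      SELECT policy_id, start_dt, end_dt,
             CASE WHEN start_dt <= MAX(end_dt) OVER (
                    PARTITION BY policy_id
                    ORDER BY start_dt, end_dt
                    ROWS BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING)
                  THEN 0 ELSE 1 END AS new_island
      FROM coverage
    ),
    grouped AS (
      -- Running sum of flags assigns an island id to each row
      SELECT policy_id, start_dt, end_dt,
             SUM(new_island) OVER (PARTITION BY policy_id
                                   ORDER BY start_dt, end_dt) AS island_id
      FROM flagged
    )
    -- Collapse each island into one normalized period
    SELECT policy_id, MIN(start_dt) AS start_dt, MAX(end_dt) AS end_dt
    FROM grouped
    GROUP BY policy_id, island_id;

The running MAX handles periods fully contained inside earlier ones, which a simple LAG(end_dt) comparison would miss.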
Migration Targets

Cloud-native platforms — idiomatic output, not literal translation

MigryX generates code native to each target platform — not a generic SQL approximation. Each target has its own transformation ruleset.

Snowflake SQL

COPY INTO for bulk loads, MERGE INTO for MultiLoad upserts, clustering keys from PRIMARY INDEX advisory, and Snowflake Scripting for BTEQ control flow.
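
A MultiLoad UPSERT phase, for example, maps onto a single MERGE statement (table and column names are hypothetical):

    MERGE INTO customers AS t
    USING customers_staging AS s
      ON t.customer_id = s.customer_id
    WHEN MATCHED THEN UPDATE SET t.email = s.email, t.updated_at = s.updated_at
    WHEN NOT MATCHED THEN INSERT (customer_id, email, updated_at)
      VALUES (s.customer_id, s.email, s.updated_at);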

BigQuery SQL

Standard SQL with partitioned and clustered tables, EXPORT DATA for FastExport equivalents, and BigQuery Scripting for procedural BTEQ logic.

Databricks SQL

Delta Lake COPY INTO, MERGE INTO, Liquid Clustering keys from PRIMARY INDEX analysis, and Unity Catalog object migration.

Azure Synapse

Dedicated SQL pool with distribution keys from PRIMARY INDEX, PolyBase / COPY for FastLoad equivalents, and partition schemes.

dbt Models

Incremental models for MERGE patterns, snapshot models for SCD, source definitions, and generic tests for data quality checks.
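
A minimal sketch of the incremental pattern (source and column names are hypothetical):

    -- dbt incremental model generated from a MultiLoad-style upsert
    {{ config(materialized='incremental', unique_key='customer_id') }}

    select customer_id, email, updated_at
    from {{ source('teradata_stage', 'customers_staging') }}
    {% if is_incremental() %}
    where updated_at > (select max(updated_at) from {{ this }})
    {% endif %}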

Python / Airflow

BTEQ control flow converted to Airflow DAGs with TaskFlow API. FastLoad/MultiLoad operators replaced by cloud-native connector tasks.

PySpark

Spark DataFrames and Spark SQL for large-scale Teradata data volumes — Databricks, EMR, or Dataproc. Structured streaming for near-real-time patterns.

Multi-Target

Generate output for multiple targets from a single migration run. Side-by-side comparison reports and parity validation across platforms.

Deployment

On-premise, air-gapped, and cloud-hosted options

Your Teradata source code never leaves your perimeter unless you choose cloud-hosted mode. All three deployment options are fully supported.

🏢

On-Premise

Deploy on your own Linux or Windows server. Source code, intermediate artifacts, and output remain within your data center. No outbound internet required during migration execution.

🔒

Air-Gapped / Classified

Full offline operation with no external dependencies. Suitable for classified environments, regulated industries, and organizations with strict network egress policies.

Cloud-Hosted SaaS

Browser-based access with enterprise SSO, role-based access control, and SOC 2-compliant infrastructure. Fastest time-to-first-migration — no installation required.

Pilot & Pricing

Start with a scoped Teradata pilot — results in days

Prove conversion quality on a real subset of your Teradata estate before committing to full-scale migration.

Pilot
Fixed scope · 5–10 business days
  • Up to 50 BTEQ scripts or 500 SQL objects
  • QUALIFY rewriting demonstration
  • FastLoad / MultiLoad conversion samples
  • Column-level lineage report
  • Complexity and effort estimate
  • PRIMARY INDEX migration advisory
Enterprise License
Custom
Annual license
  • Unlimited projects and users
  • Multi-source, multi-target migrations
  • SAS, Informatica, Alteryx, ODI + Teradata
  • API integration for CI/CD pipelines
  • Custom transformation rules
  • SLA-backed support
  • Priority feature roadmap access
Contact

Start your Teradata migration

Our team has hands-on experience with large-scale Teradata migrations — including BTEQ estates with thousands of scripts, multi-terabyte FastLoad pipelines, and complex QUALIFY-heavy analytical workloads.

  • Free 30-minute discovery call
  • We review your Teradata estate and give a complexity estimate
  • Pilot delivering working converted code — not just a report
Schedule a Demo on Calendly →

What to bring to the first call:

  • Approximate BTEQ / SQL object count
  • Utility mix (FastLoad, MultiLoad, TPump)
  • Teradata version and target platform preference
  • Migration timeline and compliance requirements

Or email: hello@migryx.com

MigryX
  • BTEQ & Teradata SQL — fully parsed
  • QUALIFY rewriting & column-level lineage
  • On-premise, air-gapped deployment