MigryX parses every Teradata artifact — BTEQ scripts, FastLoad, MultiLoad, FastExport, TPump, Teradata SQL, macros, and stored procedures — and converts them to modern cloud-native targets with QUALIFY rewriting, BTEQ command translation, and column-level lineage.
MigryX uses a purpose-built Teradata grammar that understands BTEQ command syntax, session semantics, and Teradata-specific SQL extensions alongside standard DML.
Teradata's proprietary extensions — QUALIFY, BTEQ control flow, utility commands, and Teradata-specific functions — require a purpose-built parser, not a generic SQL transpiler.
QUALIFY with ROW_NUMBER, RANK, DENSE_RANK, and other analytic functions — rewritten to equivalent ANSI SQL subqueries or CTEs that run on Snowflake, BigQuery, Databricks, and dbt models.
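The rewrite pattern is straightforward: the window function moves into a subquery and the QUALIFY predicate becomes an outer WHERE filter. A minimal sketch of the pattern, using Python's built-in SQLite (which has window functions but no QUALIFY) as a stand-in for the target warehouse; the `acct` table and columns are hypothetical sample data:

```python
import sqlite3

# Hypothetical sample data: latest balance per account.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE acct (acct_id INT, snap_dt TEXT, balance REAL)")
con.executemany("INSERT INTO acct VALUES (?, ?, ?)", [
    (1, "2024-01-01", 100.0),
    (1, "2024-02-01", 150.0),
    (2, "2024-01-15", 75.0),
])

# Teradata original (QUALIFY is not valid here):
#   SELECT acct_id, balance FROM acct
#   QUALIFY ROW_NUMBER() OVER (PARTITION BY acct_id ORDER BY snap_dt DESC) = 1
#
# ANSI rewrite: push the window function into a subquery, filter outside.
rows = con.execute("""
    SELECT acct_id, balance
    FROM (
        SELECT acct_id, balance,
               ROW_NUMBER() OVER (PARTITION BY acct_id
                                  ORDER BY snap_dt DESC) AS rn
        FROM acct
    )
    WHERE rn = 1
    ORDER BY acct_id
""").fetchall()
print(rows)  # [(1, 150.0), (2, 75.0)]
```

The same shape works as a CTE; on targets that support QUALIFY natively the clause can be kept as-is.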
FastLoad and MultiLoad utility scripts — including error table handling, SESSIONS, ERRLIMIT, and phase transitions — converted to COPY INTO (Snowflake/Databricks) or equivalent bulk-load patterns on each target.
.IF/.THEN/.ELSE/.GOTO conditional logic, .IMPORT/.EXPORT file operations, .LOGON/.LOGOFF session management, and .QUIT return codes — converted to Python control flow or Airflow branch logic.
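As a rough illustration of the BTEQ-to-Python conversion, the sketch below translates a hypothetical BTEQ fragment (shown in the comments) into plain control flow; `run_sql` is a placeholder for the target platform's execute call, not a real API:

```python
# Hypothetical BTEQ fragment being translated:
#   UPDATE sales SET status = 'CLOSED' WHERE period = '2024Q1';
#   .IF ERRORCODE <> 0 THEN .GOTO errexit;
#   SELECT * FROM sales WHERE status = 'CLOSED';
#   .QUIT 0;
#   .LABEL errexit
#   .QUIT 8;

import sys

def run_sql(statement: str) -> int:
    """Stand-in for the target platform's execute call; returns an error code."""
    print(f"executing: {statement}")
    return 0  # 0 = success, mirroring BTEQ's ERRORCODE

def main() -> int:
    errorcode = run_sql("UPDATE sales SET status = 'CLOSED' WHERE period = '2024Q1'")
    if errorcode != 0:   # .IF ERRORCODE <> 0 THEN .GOTO errexit
        return 8         # .QUIT 8 maps to a nonzero process exit code
    run_sql("SELECT * FROM sales WHERE status = 'CLOSED'")
    return 0             # .QUIT 0

if __name__ == "__main__":
    sys.exit(main())
```

In an Airflow conversion the same `.GOTO` branch becomes a `BranchPythonOperator` decision instead of a return code.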
PRIMARY INDEX definitions (UPI, NUPI) and SET/MULTISET table semantics are analyzed, and the migration report recommends equivalent clustering keys, distribution styles, or partition columns on the target platform.
ZEROIFNULL, NULLIFZERO, OREPLACE, OTRANSLATE, NORMALIZE, EXPAND ON, HASHROW, HASHAMP, FORMAT/TITLE, and 200+ other Teradata-specific functions mapped to cloud-native equivalents.
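Most of these mappings are mechanical substitutions to ANSI equivalents. A quick sanity check of three of them, run against Python's built-in SQLite as a neutral ANSI-ish engine (the literals are arbitrary examples):

```python
import sqlite3

con = sqlite3.connect(":memory:")

# ZEROIFNULL(x)   -> COALESCE(x, 0)
assert con.execute("SELECT COALESCE(NULL, 0)").fetchone()[0] == 0
# NULLIFZERO(x)   -> NULLIF(x, 0)
assert con.execute("SELECT NULLIF(0, 0)").fetchone()[0] is None
# OREPLACE(s,a,b) -> REPLACE(s, a, b)
assert con.execute("SELECT REPLACE('a-b-c', '-', '_')").fetchone()[0] == "a_b_c"
print("all mappings verified")
```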
Teradata parameterized macros converted to stored procedures, dbt macros, or Jinja2-templated SQL. SPL stored procedures translated to Python functions or cloud SQL procedures.
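Conceptually, a parameterized macro becomes a SQL template whose parameters are substituted at render time. A minimal sketch using the standard library's `string.Template` as a stand-in for Jinja2; the `monthly_sales` macro and its columns are hypothetical:

```python
from string import Template

# Hypothetical Teradata macro being converted:
#   CREATE MACRO monthly_sales (region VARCHAR(10), mon INTEGER) AS (
#     SELECT SUM(amount) FROM sales
#     WHERE region_cd = :region AND sale_month = :mon; );
#
# Templated-SQL equivalent (string.Template as a stand-in for Jinja2):
MONTHLY_SALES = Template("""
SELECT SUM(amount) AS total
FROM sales
WHERE region_cd = '$region' AND sale_month = $mon
""")

sql = MONTHLY_SALES.substitute(region="WEST", mon=3)
print(sql)
```

In production the parameters should be passed as bind variables rather than interpolated into the string, for the usual injection-safety reasons.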
MigryX scans your BTEQ scripts, utility files, and SQL objects — building a complete inventory with complexity scores and a migration effort estimate before any conversion begins.
Each script is parsed to an intermediate representation and converted using Teradata-specific transformation rules. QUALIFY rewrites, utility command translations, and function mappings are applied automatically.
Every converted script ships with a row-level validation test. MigryX compares converted output against Teradata originals and confirms parity before project handover.
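One common way to check row-level parity without assuming row order is an order-independent table fingerprint: hash each row, then combine the digests commutatively. A simplified sketch of that idea (not MigryX's actual algorithm, which is not described here):

```python
import hashlib

def table_fingerprint(rows):
    """Order-independent fingerprint: sum of per-row SHA-256 digests mod 2^256."""
    total = 0
    for row in rows:
        digest = hashlib.sha256("|".join(map(str, row)).encode()).hexdigest()
        total = (total + int(digest, 16)) % (1 << 256)
    return total

teradata_rows = [(1, "A", 10.5), (2, "B", 7.0)]
converted_rows = [(2, "B", 7.0), (1, "A", 10.5)]  # same rows, different order

assert table_fingerprint(teradata_rows) == table_fingerprint(converted_rows)
print("parity confirmed")
```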
Key Teradata SQL constructs MigryX rewrites automatically. Every conversion is documented in the lineage report.
| Teradata Construct | Snowflake | BigQuery | Databricks SQL |
|---|---|---|---|
| QUALIFY ROW_NUMBER()=1 | QUALIFY (native) or ROW_NUMBER() subquery | Subquery with ROW_NUMBER() | Subquery with ROW_NUMBER() |
| ZEROIFNULL(expr) | COALESCE(expr, 0) / ZEROIFNULL(expr) | COALESCE(expr, 0) / IFNULL(expr, 0) | COALESCE(expr, 0) |
| NULLIFZERO(expr) | NULLIF(expr, 0) | NULLIF(expr, 0) | NULLIF(expr, 0) |
| OREPLACE(str,s,r) | REPLACE(str, s, r) | REPLACE(str, s, r) | REPLACE(str, s, r) |
| OTRANSLATE(str,s,r) | TRANSLATE(str, s, r) | TRANSLATE(str, s, r) | TRANSLATE(str, s, r) |
| FastLoad → target table | COPY INTO from stage | LOAD DATA | COPY INTO (Delta) |
| PRIMARY INDEX (col) | CLUSTER BY (col) | CLUSTER BY (col) | CLUSTER BY (col) (liquid clustering) |
| DATE FORMAT 'YYYY/MM/DD' | TO_DATE(str, 'YYYY/MM/DD') | PARSE_DATE('%Y/%m/%d', str) | TO_DATE(str, 'yyyy/MM/dd') |
| SAMPLE n | SAMPLE (n ROWS) | TABLESAMPLE SYSTEM (pct PERCENT) | ORDER BY RAND() LIMIT n |
| NORMALIZE ON MEETS | Date range merging CTE pattern | Date range merging CTE | Date range merging CTE |
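The DATE FORMAT row above hinges on translating Teradata's format codes into each target's pattern syntax. Because BigQuery's `PARSE_DATE` uses strptime-style codes, that particular mapping can be checked directly in Python; the mapping dictionary below is illustrative, not the tool's full ruleset:

```python
from datetime import datetime

raw = "2024/03/15"  # a value stored under Teradata FORMAT 'YYYY/MM/DD'

# BigQuery's PARSE_DATE codes match Python's strptime, so the mapped
# pattern '%Y/%m/%d' can be exercised here as a proxy:
parsed = datetime.strptime(raw, "%Y/%m/%d").date()
print(parsed.isoformat())  # 2024-03-15

# Per-target pattern mapping for this one Teradata format (illustrative):
FORMAT_MAP = {
    "snowflake":  "YYYY/MM/DD",  # TO_DATE(str, 'YYYY/MM/DD')
    "bigquery":   "%Y/%m/%d",    # PARSE_DATE('%Y/%m/%d', str)
    "databricks": "yyyy/MM/dd",  # TO_DATE(str, 'yyyy/MM/dd')
}
```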
MigryX generates code native to each target platform — not a generic SQL approximation. Each target has its own transformation ruleset.
COPY INTO for bulk loads, MERGE INTO for MultiLoad upserts, clustering keys from PRIMARY INDEX advisory, and Snowflake Scripting for BTEQ control flow.
Standard SQL with partitioned and clustered tables, EXPORT DATA for FastExport equivalents, and BigQuery Scripting for procedural BTEQ logic.
Delta Lake COPY INTO, MERGE INTO, Liquid Clustering keys from PRIMARY INDEX analysis, and Unity Catalog object migration.
Dedicated SQL pool with distribution keys from PRIMARY INDEX, PolyBase / COPY for FastLoad equivalents, and partition schemes.
Incremental models for MERGE patterns, snapshot models for SCD, source definitions, and generic tests for data quality checks.
BTEQ control flow converted to Airflow DAGs with TaskFlow API. FastLoad/MultiLoad operators replaced by cloud-native connector tasks.
Spark DataFrames and Spark SQL for large-scale Teradata data volumes — Databricks, EMR, or Dataproc. Structured streaming for near-real-time patterns.
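The MultiLoad-upsert-to-MERGE conversion mentioned for Snowflake and Databricks above follows a single pattern: MultiLoad's `DO INSERT FOR MISSING UPDATE ROWS` becomes a keyed merge on the target. A minimal sketch of the semantics, using SQLite's `ON CONFLICT` upsert as a stand-in for `MERGE INTO` (the `dim_cust` table is hypothetical):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE dim_cust (cust_id INTEGER PRIMARY KEY, name TEXT)")
con.execute("INSERT INTO dim_cust VALUES (1, 'old name')")

# MultiLoad 'DO INSERT FOR MISSING UPDATE ROWS' maps to MERGE INTO on
# Snowflake/Databricks; SQLite's UPSERT expresses the same intent here.
incoming = [(1, "new name"), (2, "fresh row")]
con.executemany("""
    INSERT INTO dim_cust (cust_id, name) VALUES (?, ?)
    ON CONFLICT(cust_id) DO UPDATE SET name = excluded.name
""", incoming)

print(con.execute("SELECT * FROM dim_cust ORDER BY cust_id").fetchall())
# [(1, 'new name'), (2, 'fresh row')]
```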
Generate output for multiple targets from a single migration run. Side-by-side comparison reports and parity validation across platforms.
Your Teradata source code never leaves your perimeter unless you choose cloud-hosted mode. All three deployment options are fully supported.
Deploy on your own Linux or Windows server. Source code, intermediate artifacts, and output remain within your data center. No outbound internet required during migration execution.
Full offline operation with no external dependencies. Suitable for classified environments, regulated industries, and organizations with strict network egress policies.
Browser-based access with enterprise SSO, role-based access control, and SOC 2-compliant infrastructure. Fastest time-to-first-migration — no installation required.
Prove conversion quality on a real subset of your Teradata estate before committing to full-scale migration.
Our team has hands-on experience with large-scale Teradata migrations — including BTEQ estates with thousands of scripts, multi-terabyte FastLoad pipelines, and complex QUALIFY-heavy analytical workloads.
What to bring to the first call: