
Articles

What Claude Gets Right About COBOL Modernization — And What LLMs Still Can’t Do

In February 2026, Anthropic argued that Claude Code could reduce the cost of COBOL modernization. We tested it. Across four AWS Card Demo programs and an independent Swimm benchmark on Medicare COBOL, the same five failure patterns appeared consistently — silent data corruption, missing business logic, architectural substitution, unverifiable output, and fabricated values. These are not edge cases. They are structural consequences of how large language models work — and they explain why COBOL translation requires deterministic tooling, not statistical approximation.

Why "Documenting" Your COBOL with an LLM Doesn't Solve Anything

LLM-generated documentation of COBOL sounds like a useful first step in modernization. Read the code, get a plain-English description, hand it to developers. The problem is that documenting COBOL by inference fails the same way translating it by inference fails — the model fills gaps with plausible-sounding content that may not reflect what the code actually does. You've added a step, not solved a problem. This post explains why, and what deterministically-derived documentation looks like instead.

The Business Person's Guide to Legacy Application Modernization

Modernizing a legacy application requires understanding its four core components — Business Logic, Orchestration, Database Integration, and Reusable Components. This guide explores each component in detail and shows how target-state decisions affect modernization effort.

Modernizing Legacy Applications: A Decision Framework

Modernizing legacy applications is a critical step for organizations seeking to improve performance, reduce costs, and remain competitive. There are seven potential modernization paths — Retire, Replace, Rebuild, Rearchitect, Refactor, Rehost, and Replatform — each with a distinct cost, time, and risk profile. We've created a quick decision tree and a plain-English description of each option to help you choose.

Evaluating Legacy Code Refactoring Methods: A CTO's View

Legacy code refactoring is a critical challenge for CTOs managing aging systems like COBOL. Manual refactoring offers high accuracy but is costly and time-consuming, while generative AI lacks the reliability to handle complex legacy systems. Transpiler-based tools strike the right balance, achieving 85-95%+ accuracy while reducing cost and time. This article explores the six key pillars of evaluation — Cost, Time, Accuracy, Equivalence, Risk, and Maintainability — so CTOs can modernize with confidence.

Why LLMs Struggle with Legacy Code Refactoring and What’s Next

Neural Machine Translation (NMT) models — the transformer-based precursors to today's Large Language Models (LLMs) — have revolutionized natural language translation. Inspired by this success, developers and researchers have explored applying these models to a different challenge: refactoring legacy code. The idea is compelling — automated, AI-driven refactoring could drastically reduce the time and cost of modernization efforts. However, as we'll explore in this post, there are significant limitations to using NMT models for this purpose. These challenges highlight both the unique complexities of legacy code and the inherent constraints of transformer-based architectures.

How Tertiary Language Transpilers Refactor Legacy Code

Tertiary language transpilers offer a robust solution for refactoring legacy code by introducing an intermediary abstraction layer between the source and target languages. Unlike traditional transpilers that rely on extensive, static rule sets and often struggle with the complexities of differing language implementations, tertiary language transpilers decompose source code into an Abstract Syntax Tree (AST). This AST captures the logical structure of the code in a hierarchical, language-neutral format, preserving the original functional intent. By mapping the AST to a tertiary language, these transpilers can more effectively handle implementation differences and adapt to evolving language features, resulting in more accurate and maintainable code transformations. This approach not only mitigates the limitations of static transpilers but also aligns with advancements in generative AI, which increasingly leverage intermediary abstractions to enhance model performance.
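The pipeline described above — parse the source into an AST, map it to a language-neutral form, then emit the target language — can be sketched in miniature. This is a toy illustration, not RecodeX's implementation: the node type, the statement grammar, and the emitter are all invented for the example, and a real transpiler handles vastly more of the source language.

```python
from dataclasses import dataclass

# Hypothetical, drastically simplified AST node for one statement shape.
@dataclass
class Assign:
    target: str
    left: str
    op: str
    right: str

def parse_compute(stmt: str) -> Assign:
    # Parse a minimal "COMPUTE X = A + B" statement into an AST node.
    _keyword, target, _eq, left, op, right = stmt.split()
    return Assign(target, left, op, right)

def to_ir(node: Assign) -> dict:
    # The "tertiary language": a neutral form that captures structure,
    # not the syntax of either the source or the target.
    return {"kind": "assign",
            "target": node.target,
            "expr": {"op": node.op, "args": [node.left, node.right]}}

def emit_python(ir: dict) -> str:
    # A target-language backend; a Java or C# emitter would consume
    # the same IR without touching the parser.
    expr = f" {ir['expr']['op']} ".join(ir["expr"]["args"])
    return f"{ir['target']} = {expr}"

ir = to_ir(parse_compute("COMPUTE TOTAL = PRICE + TAX"))
print(emit_python(ir))  # TOTAL = PRICE + TAX
```

The design point the sketch shows is the decoupling: implementation differences between source and target are resolved against the neutral IR, so adding a new target language means writing one new emitter rather than a new source-to-target rule set.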

Why Refactoring Legacy Code Is Hard (And How Transpilers Help)

Refactoring legacy code like COBOL to modern languages is a daunting challenge due to differences in syntax, native capabilities, and implementation styles. Issues such as data type mismatches, global variable handling, and procedural constructs make manual refactoring costly and time-intensive. While static transpilers struggle with implementation differences, modern transpilers using tertiary languages bridge these gaps, achieving 85-95%+ accuracy. Discover how tertiary language-based transpilers are transforming legacy application modernization with faster, more reliable results.
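One concrete instance of the data-type mismatch mentioned above: COBOL numeric fields (e.g. a `PIC 9(5)V99` amount) use exact fixed-point decimal arithmetic, so a naive translation to binary floating point silently changes results. A minimal Python sketch of the difference, using the standard-library `decimal` module as a stand-in for fixed-point semantics:

```python
from decimal import Decimal

# Naive translation: COBOL decimal field becomes a binary float.
rate = 0.1
float_total = sum(rate for _ in range(10))   # accumulates rounding error

# Faithful translation: exact decimal arithmetic, as in COBOL.
dec_rate = Decimal("0.1")
dec_total = sum(dec_rate for _ in range(10))

print(float_total == 1.0)                # False: binary float drift
print(dec_total == Decimal("1.0"))       # True: exact fixed-point result
```

In a financial batch job that runs millions of such additions, the float version produces balances that no longer reconcile — which is why type mapping is one of the hardest parts of COBOL refactoring, manual or automated.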
