ALIGNMENT


The Quantum Threat.

Issue 1

By Devyansh Lilaramani

 

Introduction

For over two decades, global digital infrastructure has relied on cryptographic systems such as RSA, elliptic curve cryptography (ECC), and AES, all built on the assumption that breaking them, whether by solving the underlying mathematical problems or by exhaustive key search, is computationally infeasible. The emergence of quantum computing challenges this foundation directly. With the development of algorithms such as Shor’s and Grover’s, the security guarantees that protect financial systems, communication networks, government databases, and critical infrastructure now face a measurable and time-bound risk.

Quantum computing and the erosion of classical assumptions

Shor’s algorithm demonstrated that integer factorization and discrete logarithm problems, which form the basis of RSA and ECC, can be solved in polynomial time on a sufficiently powerful quantum computer. Present-day machines lack the stability and qubit count required to compromise real-world cryptographic parameters, but experimental demonstrations of small-scale factoring confirm that the underlying method works in principle. Grover’s algorithm reduces the effective security of symmetric systems by providing a quadratic speedup for brute-force key search. This does not break algorithms like AES outright, but it significantly narrows their safety margins.
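
To make that narrowing concrete, here is a minimal back-of-the-envelope sketch in Python. It assumes the textbook Grover bound of roughly the square root of the keyspace size in oracle queries and ignores real costs such as circuit depth, error correction, and limits on parallelism, all of which make practical attacks far more expensive; the qualitative conclusion, that doubling symmetric key lengths (for example, moving from AES-128 to AES-256) restores the intended margin, does not depend on those details.

```python
import math

# Rough comparison of classical exhaustive key search with Grover-assisted
# search: ~N/2 classical guesses on average versus ~sqrt(N) quantum oracle
# queries for a keyspace of size N. Illustrative arithmetic only.
for key_bits in (128, 192, 256):
    keyspace = 2 ** key_bits
    classical_avg = keyspace // 2           # expected classical guesses
    grover_queries = math.isqrt(keyspace)   # ~sqrt(N) oracle calls
    print(f"AES-{key_bits}: classical ~2^{math.log2(classical_avg):.0f} guesses, "
          f"Grover ~2^{math.log2(grover_queries):.0f} queries")
```

The effective strength of AES-128 drops to roughly 2^64 quantum queries, while AES-256 still leaves about 2^128, which is why larger symmetric keys are the standard mitigation.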

A present threat created by long-term data value

The most immediate risk does not come from a future quantum computer that can break RSA in a single day. It comes from a strategic behaviour that is already occurring, often described as the harvest now, decrypt later model. Encrypted data intercepted today can be stored indefinitely. Once quantum hardware matures, this archived data may become readable, regardless of how secure it appeared when it was collected. Communications with long-term value, including diplomatic, medical, financial, and corporate records, are therefore vulnerable even if quantum computers remain limited in the near term.

Post-quantum cryptography as a structural response

Mitigating these risks requires replacing vulnerable public-key systems rather than attempting to reinforce them. Post-quantum cryptography provides a family of mathematical constructions designed to resist both classical and quantum attacks while remaining deployable on existing hardware.

Several approaches have emerged as leading candidates:

Lattice-based systems.
Schemes such as CRYSTALS-Kyber, standardized as ML-KEM, and CRYSTALS-Dilithium, standardized as ML-DSA, rely on the hardness of problems such as Learning With Errors. These systems provide strong security and practical performance, making them suitable replacements for RSA and ECC. They also offer relatively small key sizes and efficient operations, which allows them to integrate into existing communication protocols with minimal disruption. A toy sketch of the Learning With Errors problem appears after this list of candidates.

Hash-based signatures.
SPHINCS+, standardized as SLH-DSA, derives its security from cryptographic hash functions rather than number-theoretic assumptions. This results in conservative, long-term signature security that is less sensitive to unforeseen mathematical breakthroughs. Although signature sizes are larger than those of classical systems, their predictable behaviour and lack of hidden structure make them attractive for high-assurance environments. A toy hash-based one-time signature also appears after this list.

Code-based cryptography.
Systems such as Classic McEliece and the newly selected HQC rely on the difficulty of decoding general error-correcting codes without structural information. They are robust and well studied, although some require large public keys. Their long history of surviving extensive cryptanalytic efforts provides additional confidence that they will remain secure even as quantum hardware progresses.

Multivariate schemes.
These systems rely on the difficulty of solving systems of quadratic equations over finite fields. Several candidates initially appeared promising, but work such as the successful attack on Rainbow demonstrated significant weaknesses. Continued research may yield more resilient variants, but the field currently demands caution until additional schemes demonstrate stronger resistance to real-world attacks.
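
Two of the families above lend themselves to small illustrations. First, a minimal and deliberately insecure sketch of the Learning With Errors idea that underpins Kyber/ML-KEM: the public key hides the secret behind small random errors, and recovering the secret from (A, A·s + e mod q) is believed to be hard. The single-bit, Regev-style encryption and the parameters here are chosen for readability, not security, and are not how ML-KEM is actually constructed.

```python
import random

# Toy Regev-style encryption built on the Learning With Errors (LWE) problem.
# Insecure, illustration only: real schemes such as ML-KEM use structured
# (module) lattices, carefully chosen parameters, and many further refinements.

n, m, q = 16, 32, 3329   # secret dimension, number of public samples, modulus

# Key generation: secret vector s, public (A, b = A*s + e mod q) with small e.
s = [random.randrange(q) for _ in range(n)]
A = [[random.randrange(q) for _ in range(n)] for _ in range(m)]
b = [(sum(A[i][j] * s[j] for j in range(n)) + random.choice([-1, 0, 1])) % q
     for i in range(m)]

def encrypt(bit):
    # Combine a random subset of the public samples and add bit * (q // 2).
    r = [random.randint(0, 1) for _ in range(m)]
    u = [sum(r[i] * A[i][j] for i in range(m)) % q for j in range(n)]
    v = (sum(r[i] * b[i] for i in range(m)) + bit * (q // 2)) % q
    return u, v

def decrypt(u, v):
    # v - <u, s> is a small error (bit 0) or roughly q/2 plus a small error (bit 1).
    d = (v - sum(u[j] * s[j] for j in range(n))) % q
    return 0 if min(d, q - d) < q // 4 else 1

assert all(decrypt(*encrypt(bit)) == bit for bit in (0, 1, 1, 0))
```

Second, a toy Lamport one-time signature, the simplest member of the hash-based family on which SPHINCS+/SLH-DSA builds. Its security rests only on standard hash-function properties, but each key pair may sign at most one message; roughly speaking, SLH-DSA's contribution is assembling such one-time components into a practical, stateless, many-time scheme.

```python
import hashlib
import secrets

# Toy Lamport one-time signature over SHA-256. One key pair, one message only.
def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def keygen():
    # 256 pairs of random secrets; the public key is their hashes.
    sk = [[secrets.token_bytes(32), secrets.token_bytes(32)] for _ in range(256)]
    pk = [[H(pair[0]), H(pair[1])] for pair in sk]
    return sk, pk

def msg_bits(msg: bytes):
    digest = int.from_bytes(H(msg), "big")
    return [(digest >> i) & 1 for i in range(256)]

def sign(sk, msg: bytes):
    # Reveal one secret per bit of the message digest.
    return [sk[i][bit] for i, bit in enumerate(msg_bits(msg))]

def verify(pk, msg: bytes, sig) -> bool:
    return all(H(sig[i]) == pk[i][bit] for i, bit in enumerate(msg_bits(msg)))

sk, pk = keygen()
signature = sign(sk, b"post-quantum")
assert verify(pk, b"post-quantum", signature)
assert not verify(pk, b"a different message", signature)
```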

The NIST standardization effort has created a clear pathway for global adoption by selecting algorithms that balance security requirements with performance and implementability.

The scale of the migration challenge

The transition to post-quantum systems presents a far more complex challenge than simply replacing one algorithm with another. Cryptographic protocols are deeply embedded within operating systems, industrial control systems, embedded devices, browsers, payment networks, and global communication infrastructure. Many devices cannot be updated easily. Others are locked into long hardware life cycles that delay coordinated upgrades.

Effective migration will require cooperation between governments, standards organizations, software vendors, hardware manufacturers, and service providers. Without this coordination, even mathematically secure post-quantum algorithms may not deliver the level of protection required.

Conclusion

Quantum computing does not yet possess the capability to break modern cryptographic standards, but its trajectory makes the long-term reliability of classical systems increasingly uncertain. The demonstrated feasibility of quantum attacks, combined with ongoing hardware improvements and the persistence of stored encrypted data, introduces a clear and urgent need to transition toward quantum-resistant standards.

Post-quantum cryptography provides a viable foundation for future security. Its success, however, will depend on timely global migration and careful implementation. The stability of digital trust, spanning government systems, financial infrastructure, and personal privacy, now rests on how quickly and effectively this transition is executed.

 
 

On “Alignment”.

Issue 0

By Devyansh Lilaramani

 

Introduction


Alignment is one of those words that, at first, sounds deceptively simple: straight lines, order, and harmony. But its roots dig quite a lot deeper; to be aligned is to be attuned—to have intention and direction bound by some principle. It implies harmony but also discipline: things don't just align by accident; they are aligned through effort. In that sense, alignment is less about order and harmony than about integrity.

In the field of artificial intelligence, the term has been borrowed to outline a goal: to keep machines consistent with human intentions and goals, ensuring that they don’t drift into harm or unpredictability. Yet the term, even framed as such, still carries remnants of its moral meaning. It’s not just about building systems that follow commands; it is about understanding the very nature of the commands themselves. We, as a society, use alignment to measure machines, but alignment is also the mirror by which we measure ourselves. Whether in reference to artificial intelligence or the world, alignment asks the same thing: are our abilities guided by our values or ambitions?

Alignment isn’t just a technical term; its meaning is moral. The real challenge is not making machines that are able to mimic humans, but making humanity worthy of imitation.

The Machine


AI alignment, in its simplest form, is the effort to keep artificial systems consistent with human intent—to keep them moral, in a sense. But the irony is hard to ignore. We demand that machines be ethically precise, while society excuses moral ambiguity. We feed our large language models data corrupted by human bias and then blame them for prejudice. The task of “teaching machines morals” feels almost comical when we, their creators, still do not have a concrete definition of what morality means. The threat behind AI doesn’t come from the technology itself; it comes from the people creating it.

The Human


Humanity has always struggled with alignment. We say we want to progress, but we resist accountability. We praise empathy but design systems that reward indifference. The history of civilization is one long conflict between our abilities and intentions. Every tool we have built, from the printing press to the algorithm, has mirrored our curiosity and carelessness. The printing press spread knowledge, yes—but also propaganda. The internet connected billions and simultaneously divided them. Our inventions extended our reach, but they also revealed our faults. So when a machine discriminates, deceives, or distorts, it’s not malfunctioning; it is simply executing its task: getting ever closer to becoming human.

Alignment


To be aligned—truly aligned, I mean—is to bridge the gap between what we can do and what we should do. Alignment is no longer just an engineering riddle; it is a moral discipline. Alignment means coherence between ability and intention, between code and conscience. The future won’t be shaped by more intelligent machines; it will be shaped by wiser humans. That is the ultimate end goal of alignment: to ensure our conscience grows with our creations.

Conclusion


To conclude, alignment is not simply an end goal; it is a practice—a constant balance between possibility and principle. It asks us to remember not just what we innovate, but why we innovate. Our creations will learn whatever lessons we teach them, whether intentional or unintentional. So the task remains not to make them more human, but to make ourselves more humane.

This is why the Alignment Institute was created: to remind us that technology’s future is inseparable from our own—and that the real work of alignment begins, and must continue, with us.

 
 