Zerodha CTO Kailash Nadh: Having no investor pressure gives us the product edge

Zerodha’s chief technology officer (CTO) Kailash Nadh, who has a PhD in Artificial Intelligence and Computational Linguistics, spearheaded the development of Kite, the company’s core trading platform, and is leading the company into its tech future amid rising competition. In an exclusive interview, he talks about using large language models (LLMs, machine learning models trained on vast amounts of data that can understand and generate human language text) heavily to help with technical tasks, saving significant amounts of time. There is widespread decentralised innovation happening in AI technologies in the open-source world, with new breakthroughs and improvements coming out on a weekly basis. Zerodha has been experimenting with self-hosting some of these open-source AI tools to make internal back office-related organisational tasks more efficient, he says. Excerpts:

It’s widely believed that Artificial Intelligence (AI) will impact the job market, and there will be severe redundancies. At the same time, humans have the creative power to tide over such a situation, and they have done so in the past when machines threatened their jobs. At this point in history, where does the balance lie?

AI technologies are multi-dimensional, unlike other technologies. For instance, a student, a lawyer, a researcher, a writer, and a software developer can all use the exact same LLM tool to seek direct solutions to very different kinds of problems in their respective areas. This is very different from how generic tools like word processors aid problem-solving. This time, I feel, it is different: even the very idea of creativity in the context of this new set of technologies has become a hot philosophical debate.

Of course, we have to apply common sense. We cannot sacrifice the right decision in the name of automation and efficiency, for instance, in matters of insurance claims. Relying on AI to make high-impact decisions isn’t a good idea yet; those should remain with humans who are accountable for them. Guardrails should be in place for critical areas, and how to regulate this is a global debate.

You had earlier said generative AI is a genuine breakthrough unlike most fads in tech. Why did you say that? Can you mention some of the tech that surprisingly turned out to be fads later?

These technologies work surprisingly well. Text, speech, image, and video tools powered by generative AI have been commoditised in no time and have become widely available for daily use. Hundreds of millions of people use them directly every day. I personally have been using LLMs heavily to help with technical tasks, and they have been saving me significant amounts of time, which simply was not possible before.

Of course, there is plenty of hype surrounding these technologies, but there is significant substance underneath it as well. There are so many fads in technology. Remember blockchain, which was meant to revolutionise the world? Or ‘Big Data,’ which became a buzzword, where every organisation was meant to reap untold benefits from massive amounts of data? What about 5G? It was meant to revolutionise everything from mobility to ‘smart cities’ and whatnot.

Are you one of those AI sceptics who became an AI optimist? I remember an article you wrote a few years ago referring to snake-oil sellers pitching nonsense “powered by AI/ML”.

I am not an AI-optimist or an AI-sceptic. I was, and continue to be, an ardent sceptic of the vacuous “powered by AI” claim, where organisations used that phrase mindlessly in an attempt to distinguish themselves while not using any AI technologies at all or using some rudimentary form of it. With the recent breakthroughs and the commoditisation of AI technologies, anyone can easily integrate AI technologies and claim to be “powered by AI,” rendering the phrase itself meaningless.

How much of a fan are you of automation? Are there elements we shouldn’t leave to automation?

I have been writing software and building technologies, and enjoying doing it, for a very long time, professionally and personally. The significant majority of my work is writing software and automations that make lives simpler for humans: user-centric technologies that provide meaningful utility and quality-of-life improvements. Any sort of critical decision-making that impacts lives or society, I wouldn’t leave fully to automation, for instance, service delivery to citizens, the processing of insurance claims, etc. The responsibility for such critical decisions should lie with humans, who can be held accountable.

You have been instrumental in creating Kite, the company’s core trading platform. It’s known for its seamless user experience. Zerodha was the pioneer in the field but competition is catching up. How do you think Zerodha can retain its edge, considering tech has been commoditised in the field?

Two companies can use the same framework, the same programming language, and the same database, but how they package the product and finally deliver it to customers makes a huge difference. Many of our competitors have devolved into pushing things at customers that are not in their best interests, nudging them to trade more so the company makes additional revenue. We don’t do that. If you open our app, you don’t see any products or loans being pushed at you. That’s our business philosophy.

We don’t have external investors. Companies that have raised VC funding have to answer to their investors, and all of that shows in how they package their products. We don’t have any investor pressure, while our competitors, who are all heavily funded, do. That’s our advantage, and I would like to think that our edge is widening.

It’s clear that open-source has a definitive role to play in generative AI. How actively is Zerodha exploring these alternatives? Zerodha has also launched a dedicated $1 million annual fund to provide financial support to open-source projects globally.

There is widespread decentralised innovation happening in AI technologies in the open-source world, with new breakthroughs and improvements coming out on a weekly basis. At Zerodha, we have been experimenting with self-hosting some of these open-source AI tools to make internal back office-related organisational tasks more efficient. This has been working pretty well. With our newly launched Free/Libre and Open Source Software (FLOSS) fund, our goal is to extend financial support to Free and Open Source Software (FOSS) projects that are critical to the ecosystem. We have created a small dedicated team internally to run this initiative.

Can you give us an example of how Zerodha used AI tools to enable greater efficiency in the company?

Let’s take the transformation of the quality assurance process. Our team had been listening to tens of thousands of recorded customer calls for many years, a manual process that was soul-crushing for the team. We created a pipeline of calls and used Whisper, an open-source model, to convert voice to text. Then we used a locally hosted LLM to analyse this text against certain quality parameters. LLMs analyse these transcripts now, and we are able to identify where quality parameters haven’t been met without having to resort to random sampling. This has resulted in an enormous efficiency boost.
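(A minimal sketch of such a pipeline is below, assuming the open-source openai-whisper Python package for transcription and a locally hosted LLM served behind an OpenAI-compatible HTTP endpoint. The endpoint URL, model names, input file, and checklist items are illustrative assumptions, not Zerodha’s actual setup.)

```python
# Sketch: transcribe a recorded call with Whisper, then ask a locally hosted
# LLM to review the transcript against a quality checklist.
# Endpoint URL, model names, file name, and checklist are illustrative only.
import requests
import whisper  # pip install openai-whisper

QUALITY_CHECKLIST = [
    "Did the agent greet the customer and verify their identity?",
    "Was the customer's issue acknowledged and resolved or escalated?",
    "Did the agent avoid sharing misleading or non-compliant information?",
]

def transcribe(audio_path: str) -> str:
    """Convert a recorded call to text using the open-source Whisper model."""
    model = whisper.load_model("base")  # small model; larger ones are more accurate
    return model.transcribe(audio_path)["text"]

def review_transcript(transcript: str,
                      llm_url: str = "http://localhost:8000/v1/chat/completions") -> str:
    """Ask a locally hosted, OpenAI-compatible LLM to score the call against the checklist."""
    prompt = (
        "You are a QA reviewer for customer-support calls. For each checklist item, "
        "answer PASS or FAIL with a one-line reason.\n\n"
        "Checklist:\n- " + "\n- ".join(QUALITY_CHECKLIST) +
        "\n\nTranscript:\n" + transcript
    )
    resp = requests.post(
        llm_url,
        json={
            "model": "local-model",  # whatever model the local server exposes
            "messages": [{"role": "user", "content": prompt}],
            "temperature": 0,
        },
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    text = transcribe("call_recording.wav")  # hypothetical input file
    print(review_transcript(text))
```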

I’m learning a new programming language called Rust. I started working on a project in Rust and am building reasonably complex software while learning the language from scratch with the help of an LLM. I’ve made a reasonably well-written prototype in this language in a matter of hours, something that would otherwise have taken days. As a senior engineer, when I’m stuck with a bigger problem, I feed the issue to an LLM, and it gives me solutions in 30 seconds that would otherwise take 30 minutes. What’s happening now is unthinkable. Even when dealing with a complex engineering problem, an LLM can suggest three approaches, and you can exercise your own judgement to pick one.

What are the types of job roles that you foresee becoming redundant in India due to AI in the next few years? And why?

The most obvious candidates seem to be entry-level tasks where language comprehension and creation are involved. The low-hanging fruit seems to be programming tasks by junior developers, cataloguing and summarising research material by research assistants, and corporate writing and graphic design tasks.

Do you see engineering students now making a beeline for AI-related courses? What would be your advice to them?

I do, and I don’t think that is necessarily useful to a large number of students. “Big Data” courses were hot at one point too, remember? I don’t think countless students beelining for engineering courses was necessarily a good thing either. The only advice I can give students is to discover problems they can relate to and work on personal projects that solve them, gaining first-hand experience. Hands-on experience building technologies beats everything and accords a significant edge.

(Note to readers: Aye, AI is a column that deals with Artificial Intelligence and its possibilities by engaging in conversations with the brightest minds in the field)


