
Your keyboard will now become an AI keyboard: Qualcomm’s Katouzian


Has the 5G ecosystem matured sufficiently for users and enterprises?

Transitions between the cellular "G"s have traditionally been about 10 years apart. We (Qualcomm) tried to push the market and the technology to speed up those transitions. We launched 5G towards the end of 2019. A year later, it started to take off with data cards, following which phones helped it pick up momentum. Later, what helped was embedded 5G capability in multiple devices and networks around the world—both private and public cellular networks; the movement towards vRAN (virtualized radio access network); and competition between multiple vendors to provide the best solution. While 4G is still in existence, the numbers are larger than 3G volumes were. But if you have IoT (internet of things) devices at home or in the field, you would not want a generation-minus-one (older) technology if you want to make sure it lasts for 10 years. We have a roadmap to help transition 4G into 5G with even more aggressive capabilities. I anticipate 6G will start rolling out around 2029.

How are 5G networks and devices now leveraging AI, GenAI and XR?

For the longest period of time, AI was running in the background on devices—it would generate a better picture or a better video, detect malware, and even detect false networks that were trying to access your device. AI was also improving your connectivity to the 5G network, detecting the type of connection you were on—whether WiFi or cellular—and transitioning between them. Now AI is in the foreground, completely changing the interface to your device with LLMs (large language models) and LVMs (large vision models).

Your keyboard will now become an AI keyboard and detect what you want to type; it will generate things for you, and your pen will become an AI pen. So, when you draw or sketch an image, it will generate an actual picture you can use. And the faster you can process those things, the better. You can generate it (a picture) 10 times over until you get the image exactly the way you want. You can embed that into your presentation or your storybook, and upload it to social media.

These use cases will only improve with time. And the interface to your PC or your phone will no longer mean launching an app, typing something, or opening something up. You will just need to tell the device what to do, and it will launch applications and draw things for you, and then present them to you. Your screen, too, will understand context. For example, while you're using your laptop, you can tell the device: "I saw something on YouTube last week—a rare yellow bird. Can you bring that back up for me, since I forgot where I saw it?" Or you could generate an assistant and send it to a Teams meeting, and it will take notes for you, bring back action items, know whom to assign the tasks to, summarize everything for you, and even help you write a presentation.

This way of interfacing with a device will save you time too—it will cut the time spent on search and launching apps by concatenating all that data together, and thus become a change agent for people upgrading their devices and services.

We're excited about these developments because we can now have a situation where edge devices like a PC or a handset, or a watch or XR glasses, can ride on top of the AI wave—not only on the device itself, without any network connection, which gives you the most security, privacy and personalization—but also in a hybrid situation where I reach out to the cloud if I don't have enough capability on the device. That's where 5G comes in—it will give you the best data throughput, the best round-trip capability, and the least amount of latency.
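In practice, that hybrid split can be thought of as a simple routing decision in application code. Below is a minimal sketch of the idea, assuming hypothetical run_on_device and run_in_cloud helpers and an arbitrary context limit; none of this reflects an actual Qualcomm API.

```python
# Hypothetical sketch of hybrid edge/cloud inference routing.
# run_on_device() and run_in_cloud() are placeholder helpers, not a real API.

ON_DEVICE_CONTEXT_LIMIT = 4096  # assumed capacity of the local model (illustrative)


def run_on_device(prompt: str) -> str:
    # Placeholder: invoke a quantized model through the local runtime (e.g. an NPU path).
    return f"[on-device answer to: {prompt[:40]}...]"


def run_in_cloud(prompt: str) -> str:
    # Placeholder: send the request over the network to a larger hosted model.
    return f"[cloud answer to: {prompt[:40]}...]"


def answer(prompt: str, needs_fresh_web_data: bool = False) -> str:
    """Prefer the private on-device path; fall back to the cloud only when the
    request exceeds local capability or needs external data."""
    too_long = len(prompt.split()) > ON_DEVICE_CONTEXT_LIMIT
    if needs_fresh_web_data or too_long:
        return run_in_cloud(prompt)
    return run_on_device(prompt)


if __name__ == "__main__":
    print(answer("Summarize my last three meeting notes."))
    print(answer("What changed in today's news about 6G trials?", needs_fresh_web_data=True))
```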

Research firm Counterpoint expects about 1 billion GenAI smartphones to be cumulatively shipped between CY2024 and CY2027. But these devices will also require a different computing platform and software. How is Qualcomm gearing up for these numbers?

Today, when we embed AI models, we have (among other things) a hardware abstraction layer and a framework that allows developers to write directly to our hardware. You can use a runtime like ONNX Runtime (built around the Open Neural Network Exchange format, an inference engine capable of executing machine learning, or ML, models), or other runtimes provided by Meta, Microsoft and Google. Underneath that, we have interfaces that direct the data to the NPUs (neural processing units) inside the device to give you the best type of acceleration—we can also route workloads to our GPU (graphics processing unit) and our CPU (central processing unit). Some models run better on a CPU, others run best on a GPU, while some models are best suited to an NPU.
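For developers, that routing typically shows up as the choice of execution provider in the runtime. A minimal sketch with ONNX Runtime's Python API follows; the "model.onnx" file, the input shape, and the availability of the Qualcomm ("QNN") provider in a given ONNX Runtime build are all assumptions for illustration.

```python
# Sketch: selecting an execution provider in ONNX Runtime so the same model
# can be routed to an NPU, GPU, or CPU backend. Provider availability depends
# on how ONNX Runtime was built for the target device; "model.onnx" and the
# input shape are placeholders.
import numpy as np
import onnxruntime as ort

preferred = ["QNNExecutionProvider",   # NPU path on Qualcomm-enabled builds (if present)
             "CPUExecutionProvider"]   # always-available fallback

available = ort.get_available_providers()
providers = [p for p in preferred if p in available] or ["CPUExecutionProvider"]

session = ort.InferenceSession("model.onnx", providers=providers)

# Feed a dummy input matching an assumed image-model shape.
inputs = {session.get_inputs()[0].name: np.zeros((1, 3, 224, 224), dtype=np.float32)}
outputs = session.run(None, inputs)
print("Ran on:", session.get_providers()[0], "output shape:", outputs[0].shape)
```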

We can also take a model and quantize it to miniaturize it without losing accuracy. And we can compress a model to fit within the memory constraints of a handset. For example, we can take a 7-billion-parameter model and compress it into about 3.7 gigs (GB) of onboard RAM (random access memory). So, now people will sell phones with the added value of more RAM (most phones today have 6-8 GB), the idea being that you will have more capability to run things in parallel (like we do on desktop PCs)—for example, you can have multiple windows open. So, phones will shift from 12 GB to 16 GB, and even 24 GB of RAM, which will make it easier for us to embed bigger and bigger models.
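The 3.7 GB figure is roughly what back-of-the-envelope arithmetic gives for a 7-billion-parameter model at around 4 bits per weight, plus a small allowance for overhead. A short illustrative calculation, with the bit width and overhead treated as assumptions:

```python
# Back-of-the-envelope memory footprint of a quantized model.
# The 4-bit figure and the ~0.2 GB overhead are assumptions used to show how
# 7B parameters can land near the ~3.7 GB quoted in the interview.

def model_size_gb(params: float, bits_per_weight: float, overhead_gb: float = 0.0) -> float:
    bytes_total = params * bits_per_weight / 8
    return bytes_total / 1e9 + overhead_gb

for bits in (16, 8, 4):
    print(f"7B params @ {bits}-bit weights ≈ {model_size_gb(7e9, bits):.1f} GB")

# Adding a small allowance for quantization scales, embeddings and runtime buffers:
print(f"4-bit + ~0.2 GB overhead ≈ {model_size_gb(7e9, 4, 0.2):.1f} GB")
```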

How will this work out in practice?

Let's say I wanted to do a travel booking. Today, you launch an airline app, a hotel app and a rental car app, and figure out where to go—all these things get concatenated by you, and then you try to figure out where to go, and maybe even talk to someone on the phone. So, what can a 7-billion-parameter model do for you? You can instruct the phone: book me this travel. It will intelligently start to ask you questions and figure out which apps to launch—all of that can get done with a few words of conversation. That kind of complexity can be embedded in the phone and used continuously.
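The orchestration implied here, where a small on-device model interprets the request and decides which apps to launch, can be sketched very roughly as below. The app names and the keyword-based stand-in for the model's planning step are purely illustrative, not any real assistant API.

```python
# Hypothetical sketch of an on-device assistant deciding which apps a travel
# request needs. pick_apps() stands in for a small LLM's planning step; the
# app names are illustrative placeholders. Requires Python 3.9+.

APP_REGISTRY = {
    "flights": "airline_app",
    "hotel": "hotel_app",
    "car": "rental_car_app",
}


def pick_apps(request: str) -> list[str]:
    # Placeholder for the model's plan; here just a keyword heuristic.
    keywords = {"flight": "flights", "fly": "flights", "hotel": "hotel", "car": "car"}
    return [APP_REGISTRY[v] for k, v in keywords.items() if k in request.lower()]


def plan_trip(request: str) -> None:
    apps = pick_apps(request) or list(APP_REGISTRY.values())
    print("Launching:", ", ".join(dict.fromkeys(apps)))  # dedupe, keep order


plan_trip("Book me a flight and a hotel in Bengaluru next week")
```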

Other things require far fewer parameters. As an example, what do you do on a daily basis? You text, you write an email, you check a location, you search for something—all of those things can happen with much smaller models. All of that is available with our technology and software stack, and we provide that capability to our OEMs (original equipment manufacturers). As AI improves, it will be the catalyst for people to upgrade in a shorter period of time rather than a longer one—the lifespan of a PC could become much shorter—which will help you (companies) sell more devices.

The other important part is that you form a triangle between your device, the cloud, and yourself—whenever you need to, you can access the cloud through fast 5G connections; whenever you don't need to, you have your privacy and security on your own device, with bigger models.

On the laptop, we’re embedding 30 billion parameter models. We’re also working with Microsoft to embed their first-party apps and AI models, which will be embedded into the OS (operating system), following which the interface to the laptop will be completely different.

How will this strategy translate in India and other global markets?

For the first time, what Microsoft is going to put into the PC market will be compiled and built on the Snapdragon X Elite processor (capable of running GenAI models with over 13 billion parameters on-device). The PC market has provided a very big opportunity for Qualcomm to grow, given that about 200 million or more laptops are sold annually (globally). We hope to capture at least 10% of that. There are three reasons why I think we're going to be successful. One is that we have a disruptive solution—an Arm-based device with a high-performance CPU, a very capable GPU, and leading-edge AI embedded in the processor. We think it's an inflection point for the PC, where thin, light devices with long-lasting batteries, high performance and built-in AI capability will change the user experience. Second, we have long-term investment in, and learning from, this (PC) market since 2016. And, third, we have a go-to-market partner in Microsoft to help us get up and running in this market.

Qualcomm is one of the companies that the ministry of electronics and information technology (Meity) is partnering with to strengthen India’s semiconductor ecosystem. What is your company bringing to the table?

We bring a lot to the table in terms of contributing to standards, and even governments are taking notice of us as one of the leading fabless semiconductor companies in the world that can also enable manufacturing facilities to ramp up as fast as possible. We see the middle class growing in India, and with it affordability is rising, and the country is progressing with advancements in technology. That's why we have such a large presence here, with about 17,000 engineers. We're pushing our company to transition from a communications-based company to a connected-computing company for the intelligent edge, and, eventually, to extend that capability into the cloud. We have devices that can put inference (a model's ability to make predictions from new data) capability into the cloud. And we're working with multiple vendors to try to get that done, including server manufacturers and tier-one and tier-two data centre companies.

How are your XR applications shaping up, and how are they helping Qualcomm increase its market share in this space?

We started doing R&D (research and development) on XR-type solutions 10-11 years ago. Since then, we've learned a tonne about what it takes to actually run these types of devices with perception algorithms, because spatial computing is unlike regular computing, where you type, give a voice command, or use your fingers to stretch and launch something. In this case (spatial computing), the headset tracks your eyes, your head and your hands. It tracks degrees of freedom, and when you look at something, it does a 3D reconstruction of that object or place. Hence, spatial computing becomes a different way of running all these algorithms. We've created a stack of perception algorithms that we've embedded into a platform called 'Spaces'. We provide an SDK (software development kit) to the more than 5,000 developers building these applications and services, and we then act on their feedback, because you want the most performance at the lowest power. Because these glasses have to last for a long time, you want efficiency. And the large internet companies we have partnered with are helping us with content and services, and pushing that path forward.
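For readers unfamiliar with the term, "degrees of freedom" here refers to the six values (three for position, three for rotation) in which a headset pose is tracked every frame. The snippet below is a generic illustration of such a pose structure and is not the Snapdragon Spaces SDK.

```python
# Illustrative 6DoF (six degrees of freedom) pose: three values for position
# and a quaternion encoding the three rotational degrees of freedom.
# Generic sketch only; not the Snapdragon Spaces SDK.
from dataclasses import dataclass


@dataclass
class Pose6DoF:
    x: float      # position in metres
    y: float
    z: float
    qw: float     # orientation quaternion
    qx: float
    qy: float
    qz: float


def head_moved(prev: Pose6DoF, curr: Pose6DoF, threshold_m: float = 0.01) -> bool:
    """Cheap positional check a perception loop might use before doing heavier
    work such as re-running 3D reconstruction."""
    dx, dy, dz = curr.x - prev.x, curr.y - prev.y, curr.z - prev.z
    return (dx * dx + dy * dy + dz * dz) ** 0.5 > threshold_m


print(head_moved(Pose6DoF(0, 0, 0, 1, 0, 0, 0),
                 Pose6DoF(0.05, 0, 0, 1, 0, 0, 0)))  # True: moved 5 cm
```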

Our biggest customer today is Meta. And we're in a multi-year deal with Meta to figure out how VR (virtual reality) is now morphing into mixed reality. Meta is not in the phone business. But with (its platforms including) Facebook, WhatsApp, Threads and Instagram, it has more than 3 billion users, and it generates a lot of money from them through ads. When Apple cut off that data stream, Meta took a dive (in April 2021, Apple gave iPhone users the option to opt out of apps tracking them, following which Meta said it could lose $10 billion in ad revenue). But the reason they came back strong is their AI inference and training capabilities. And the models they produce, like their LLaMA-based models, have made targeting advertising to their users better, and more accurate, over time.

Glasses are important because Meta now has a way of providing services to those users with a pair of AI glasses that can see where you are, surface things that direct traffic to you, and also direct ads to you.

On the industrial side, these glasses are used in training, medical applications, health and fitness, and also in the field, where somebody looking at, for example, a panel can debug it in real time because that data is coming to them. But volume boosts the market and attracts developers, and the volume really comes from the consumer side.

Do you see the form factors of XR glasses fitting even in, say, contact lenses anytime soon?

That's probably too far a stretch. But yes, glasses will start to change form factor, and the lenses will become more compact. The actual lenses, called pancake lenses, allow for a thinner form factor. If you look at (Meta's Oculus) Quest 3, you'll see a sizeable reduction in thickness. They're still a bit bulkier than regular Ray-Ban glasses, but I'm sure in the next 2-3 years lenses will become thinner and lighter. That's why it's so important for us to integrate things on the hardware side, because then we can get the best performance at very low power.
