The adoption of generative AI is potentially more significant than the introduction of the internet. It is already disrupting most creative efforts, and it isn't nearly as capable as it will be by the end of the decade.
Gen AI will force us to rethink how we communicate, how we collaborate, how we create, how we solve problems, how we govern, and even how and whether we travel, and that is far from an exhaustive list. I expect that once this technology reaches maturity, the list of things that have not changed will be far shorter than the list of things that have.
This week, I'd like to focus on three of the bigger risks of generative AI that we should begin discussing. I'm not against the technology, nor am I foolish enough to suggest it be paused; pausing it would be impossible now.
What I suggest is that we begin to consider mitigating these problems before they do substantial damage. The three problems are data center loading, security, and relationship damage.
We’ll close with my Product of the Week, which may be the best electric SUV coming to the market. I’m suddenly in the market for a new electric car, but more on that later.
Data Center Loading
Despite all the hype, few people are using generative AI yet, let alone using it to its full potential. The technology is processor- and data-intensive yet deeply personal, so having it reside only in the cloud will not be feasible, mainly because the size, cost, and resulting latency would be unsustainable.
Much as we have done with other data- and performance-focused applications, the best approach will likely be a hybrid in which the processing power is kept close to the user while the massive data sets, which will need aggressive updating, are hosted and accessed more centrally to protect the limited storage capacities of client devices such as smartphones and PCs.
But we are talking about an increasingly intelligent system that will, at times, such as when it is used for gaming, translation, or conversation, require very low latency. How the load is divided without damaging performance will likely determine whether a particular implementation is successful.
Achieving low latency won't be easy because, while wireless technology has improved, it can still be unreliable due to weather, tower or user placement, maintenance outages, man-made or natural disasters, and incomplete global coverage. The AI must work both online and offline while limiting data traffic and avoiding catastrophic outages.
Even if we could centralize all of this, the cost would be excessive, though we do have underused performance in our personal devices that could offset much of that expense. Qualcomm is one of the first firms to flag this as a problem and is putting a lot of effort into fixing it. Still, I expect it will be too little, too late, given how fast generative AI is advancing and how relatively slowly technology like this is developed and brought to market.
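To make that trade-off concrete, here is a minimal sketch, in Python, of how a hybrid client might route a request between on-device and cloud inference. The task names, latency budget, and routing rules are my illustrative assumptions, not any vendor's actual design:

from dataclasses import dataclass

@dataclass
class Request:
    task: str               # e.g., "conversation", "document_summary"
    latency_budget_ms: int  # how long the user will tolerate waiting

# Assumed latency-sensitive workloads; a real product would measure these.
LOW_LATENCY_TASKS = {"gaming", "translation", "conversation"}

def route(request: Request, online: bool, cloud_rtt_ms: int) -> str:
    """Decide where a single inference request should run.

    The device wins when the network is down or the cloud round trip
    would blow the latency budget; otherwise the larger, fresher
    cloud model takes the request.
    """
    if not online:
        return "local"   # must keep working offline
    if request.task in LOW_LATENCY_TASKS:
        return "local"   # latency trumps model size
    if cloud_rtt_ms > request.latency_budget_ms:
        return "local"
    return "cloud"

# A chat turn with a 100 ms budget over a 180 ms link stays on-device.
print(route(Request("conversation", 100), online=True, cloud_rtt_ms=180))

The point of the sketch is that the routing policy, not the models themselves, is where the load gets divided, which is why getting that division right will make or break implementations.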
Security
I was an internal auditor specializing in security and a competitive analyst trained in legal ways to penetrate security. I learned that if someone can get enough data, they can more accurately estimate the data they don’t have access to.
For instance, if you know the average number of cars in a company parking lot, you can, with reasonable accuracy, estimate the number of employees a firm has. You can generally scan social media and figure out the interests of the firm’s leading employees, and you can watch job openings to determine the kinds of future products the company is likely developing.
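As a back-of-the-envelope sketch of how the parking-lot inference works, here is the arithmetic in Python. Every figure is a made-up assumption for illustration, not real data about any company:

# Estimate headcount from observable parking-lot data.
avg_cars = 420            # average cars counted in the lot (assumed)
occupants_per_car = 1.1   # light carpooling (assumed)
onsite_share = 0.6        # fraction of staff on site on a given day (assumed)

estimated_headcount = avg_cars * occupants_per_car / onsite_share
print(f"Estimated employees: {estimated_headcount:.0f}")  # ~770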
These large language models collect massive amounts of data, and I expect much of what they ingest is, or should be, confidential. In addition, if enough information is collected, the gaps left by what isn't ingested will be increasingly derivable.
This scenario does not apply only to corporate information. With the kind of personal information that is readily available, we’ll also be able to determine much more about the private lives of users.
Employers will be able to locate whistleblowers, disgruntled or disloyal employees, bad employee behavior, and employees who are illicitly taking advantage of the firm with greater accuracy. Meanwhile, a hostile entity will be able to derive confidential information about you, your company, or even your government with far greater accuracy than I enjoyed as either an auditor or a competitive analyst.
The best defense is likely to create enough disinformation that the tools can't tell what is real and what isn't. However, this path will also make the connected AI systems far less reliable overall, which would be fine if only competitors used those systems. It is just as likely to compromise the systems the company seeking protection uses itself, resulting in a growing number of bad decisions.
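As a minimal sketch of both the defense and its cost, assuming made-up numbers, the same planted decoys that throw off a rival's estimate also skew any system, including your own, that ingests the same feed:

import random
import statistics

random.seed(42)

# Genuine daily car counts vs. a feed polluted with planted decoys.
real_counts = [random.gauss(420, 25) for _ in range(30)]
decoys = [random.gauss(900, 50) for _ in range(10)]  # disinformation

print(f"clean estimate:    {statistics.mean(real_counts):.0f}")
print(f"polluted estimate: {statistics.mean(real_counts + decoys):.0f}")

The polluted mean lands well above reality, which is exactly the protection intended, and exactly the unreliability that every consumer of the data, friendly or hostile, inherits.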
Interpersonal Relationships
Companies like Mindverse, with its MindOS, and Suki, with its employee-supplementing avatars, are showcasing the future personal use of generative AI as a tool that can present itself as if it were you. As we progressively use tools like this, our ability to determine what is real and what is digital will be reduced significantly, and our opinions of the people who use these tools will reflect more on the tool than on the person.
Imagine having your digital twin do a virtual interview, be the face of your presence on a dating app, or take over much of your daily virtual interactions. The tool will try to be responsive to the person interacting with it, it will never get tired or grumpy, and it will be trained to present you in the best possible light. However, as it advances down this path, it will be less and less like who you really are, and it will likely become far more interesting, attractive, and even-tempered than you could ever be.
This will cause problems because, much like an actor whose partner has fallen for a character the actor once played, the eventual collision with reality will lead to breakups and a loss of trust.
The easiest fixes would be either to learn to behave like your avatar or not to use one for interactions with friends and co-workers. I doubt we'll do either, but these are the two most viable approaches to mitigating this coming problem.
Wrapping Up
Generative AI is amazing and will significantly improve performance as it ramps into the market and users reach critical mass. Yet there are significant problems to address: excessive data center loading, which should drive hybrid solutions in the future; the difficulty of preventing secrets from being derived from these enormous language models; and a considerable reduction in interpersonal trust.
Understanding these coming risks should help us avoid them. However, the available fixes aren't great, suggesting that we'll likely regret some of the unintended consequences of using this technology.