Earlier this year, India released its annual Economic Survey. Interestingly, the 2024-25 Economic Survey has a chapter titled ‘Labour in the AI Era: Crisis or Catalyst’. The chapter takes realistic stock of AI adoption trends and forecasts, concluding that “estimates about the magnitude of labor market impacts [of AI] may be well above what might actually materialize.” Given the nascent stage of AI development and deployment, the Survey refrains from deterministically predicting the impact of AI on the labor market.
However, the Survey poses an important question worth considering: “What were the problems in the world that demanded AI as the answer?” In other words, is AI a solution in search of a problem? This question should be read in light of India’s unemployment crisis. The International Labour Organization’s India Employment Report 2024 revealed that the proportion of educated youth who are unemployed nearly doubled, from 35.2% in 2000 to 65.7% in 2022. The trend of AI adoption raises alarms about automating jobs, especially white-collar jobs. In October 2024, it was reported that Indian fintech company PhonePe had laid off 60% of its customer support staff over the preceding five years as part of a shift to AI-powered solutions.
I can believe that in the short term. Especially if someone is raising money for Product X, they have a strong incentive to say “oh, yeah, we can totally have a product that’s a drop-in replacement for Job Y in 2-3 years”.
So, they’re highlighting something like this:
I think it’s fair to say that there’s very probably a combination of people over-predicting the generalized capabilities of existing systems, based on seeing those systems work well in very limited roles, and under-predicting the hurdles we’re going to crash into that we don’t yet know about.
But I am much more skeptical about people underestimating the impact in the long term. Those systems are probably going to be considerably more sophisticated, and may work rather differently than the current generative AI things. Think about how transformative industrialization was, when machines fueled by fossil fuels took over much of what had previously been manual labor done by humans. The vast majority of things that people were doing pre-industrialization aren’t done by people anymore.
https://en.wikipedia.org/wiki/History_of_agriculture_in_the_United_States
https://www.agriculturelore.com/what-percentage-of-americans-work-in-agriculture/
Basically, the jobs that 90% of the population had were in some way replaced.
That being said, I also think that if you have AI that can do human-level tasks across the board, it’s going to change society a great deal. The things to think about are probably broader than just employment; I’d be thinking about major shifts in how society is structured, or dramatic changes in the military balance of power. Hell, take the earlier example: if you were talking to someone in 1776 about how the US would change by the time it reached 2025, and they got tunnel vision and focused on the fact that about 90% of jobs would be replaced in that period, you’d probably say that’s a relatively small facet of the changes that happened. The way people live, what they do, how society is structured: all of that is quite different from the way it had been for the preceding ~12k years, the structures human society had developed since agriculture was introduced.
I’d agree that in the short term, AI is overhyped and in the long term, who really knows.
One thing I’ve always found funny, though, is that if we have AIs that can replace programmers, then don’t we also, by definition, have AIs that can create AIs? Isn’t that literally the start of the “singularity”, where every office worker is out of a job in a week, and labourers last only long enough for our AI overlords to sort out robot bodies?
Well, first, I wouldn’t say that existing generative AIs can replace a programmer (or even do that great a job at assisting one and increasing productivity). I do think there’s a potentially unexplored role for an LLM-based “grammar checker” for code, which may be a bigger win: flagging likely bugs during debugging work that would normally require a human.
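To make the “grammar checker for code” idea concrete, here’s a minimal sketch of what I mean. Everything here is hypothetical: `call_llm` stands in for whatever real completion API you’d wire up, and the prompt wording is just one guess at how you’d frame the task.

```python
def build_review_prompt(snippet: str, language: str = "python") -> str:
    """Embed a code snippet in a review request, fenced so the model
    can tell the code apart from the instructions."""
    return (
        "Review the following {lang} code for likely bugs, off-by-one "
        "errors, and misused APIs. Reply with a numbered list of findings, "
        "or 'No issues found.'\n"
        "```{lang}\n{code}\n```"
    ).format(lang=language, code=snippet)


def check_code(snippet: str, call_llm) -> str:
    """Run the 'grammar check'. `call_llm` is any str -> str function
    backed by an actual model endpoint (hypothetical here)."""
    return call_llm(build_review_prompt(snippet))
```

The point is that the model only ever annotates the code and a human decides what to do with the findings, which sidesteps the “drop-in replacement” question entirely.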
But, okay, set that aside – let’s say that we imagine we have an AI in 2025 that can serve as a drop-in replacement for a programmer, translating plain-English instructions into a computer program as well as a programmer could. That still doesn’t get us to the technological singularity, because that probably also involves doing a lot of research work. Like, you can find plenty of programmers who can write software…but so far, none of them have made a self-improving AGI. :-)
I agree with you; it was more of a commentary on “what would happen if we had AGI tomorrow”.
We’ve been “3 months away from AGI” for a few years now, and it’s debatable whether we’ll ever get there with LLMs. Looking into the results of AI tests and benchmarks shows that they are heavily gamed (to be fair, all benchmarks are gamed). With AI, though, there’s so much money involved that it’s ridiculous.
Fortunately, it looks like reality is slowly coming back. Microsoft’s CEO said something like “AI solutions are not addressing customer problems.” Maybe I’m in a bubble, but I feel like, overall, people are starting to cool on AI and the constant hype cycle.