cortesoft 5 hours ago [-]
I feel like there are (at least) three main critiques of AI, and I wish we could debate them separately, because I think they each have different resolutions.
The first is the fear of job loss, and I feel like this is the most straightforward to deal with. Personally, I think the solution should be to share the productivity of AI with society at large, in particular since AI owes most of its abilities to training on the works of society. The easiest way would be a straight tax on AI usage, and using that tax to pay a universal basic income. There are obviously a ton of variations on this idea, but I think the general premise of sharing the gains with everyone is sound. I don’t think many would complain if they lost their job but kept their income.
The other two critiques are trickier. The second is the environmental impact of AI, and the response is difficult. Doing work to make AI more efficient, and continuing to develop cleaner energy sources, is paramount. Taxing and efficiency requirements might be a start. We have the technology to produce energy in sustainable ways, but it is expensive. Sustainable energy has to be non-negotiable if massive energy usage for AI is to continue.
The last is the REAL conversation, and I don’t know the answer. How do we handle AI doing creative work? How do we treat AI creative work? How much creative work do we feel comfortable handing over to AI?
I guess there is another issue, related to the last one, which is how we deal with the ability to use AI to mislead and commit fraud at scale. How do we deal with not being able to tell what was actually said or done by a human and what is AI pretending to be human? How do we avoid and mitigate the ability for AI to generate a massive amount of custom content that is used to mislead and defraud people? So much of our current mitigation strategy relies on the assumption that certain things take a lot of effort and time, when they can now be done instantly thousands of times.
troosevelt 5 hours ago [-]
If you lost your $60,000 a year job due to this, do you really believe a basic income funded by it will make up that loss? It won't. Basic income in the US is usually proposed at $12k per year, which would add another $3 trillion to the budget. Do you think you can even get that just taxing these companies? I don't.
People who bring up basic income need to get serious about the numbers involved because I never see it. It's not a realistic solution.
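To be concrete about the numbers (the population figures below are my own rough assumptions; the $3 trillion figure only works out if you pay adults rather than all residents):

```python
# Rough annual cost of a $12k/year UBI in the US.
# Population figures are approximate assumptions, not official estimates.
ubi_per_person = 12_000      # dollars per year
us_adults = 250e6            # assumed: roughly 250M adults
us_residents = 340e6         # assumed: roughly 340M total residents

adults_only_cost = ubi_per_person * us_adults
everyone_cost = ubi_per_person * us_residents

print(f"adults only: ${adults_only_cost / 1e12:.1f}T/year")   # ≈ $3.0T
print(f"all residents: ${everyone_cost / 1e12:.1f}T/year")    # ≈ $4.1T
```

For scale, total federal spending is in the $6-7 trillion range, so either figure is a near-doubling of the budget.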
omikun 1 minutes ago [-]
People who complain that UBI doesn’t make mathematical sense don’t realize our current economy doesn’t make mathematical sense either. All the prosperity we in the developed world enjoy comes at the cost of extracting wealth from the rest of the world, with governments taking on ever more debt.
ip26 4 hours ago [-]
If companies are faced with the choice between:
- employ you at 60k/yr
- replace you with a machine that costs a lot of money, and also send you UBI of 60k/yr
It should be obvious the latter is not an option that is ever going to happen.
JeremyNT 3 minutes ago [-]
The solution to the subsequent devaluation of labor, and ability for tech oligarchs to pocket the cash instead, will not be found in capitalism.
Unless we are all to become serfs, a new way to distribute resources needs to be on the table.
UBI is a salve, offered to keep victims of the system out of abject poverty. It is too little, too late.
xboxnolifes 3 hours ago [-]
What if the machine in this context is 3x as productive as you?
mschuster91 4 hours ago [-]
The problem is, companies will go for the third route: hire a company in India to launder AI. It has already worked out once with the offshoring wave.
bluefirebrand 4 hours ago [-]
This will still wind up with them paying UBI eventually
DaSHacka 34 minutes ago [-]
For the Indians?
ggsp 4 hours ago [-]
Fair warning: I’m quite ignorant in terms of economics, so this is a naïve way of looking at it.
The question that always pops up for me when it comes to UBI applied to the current capitalist system: even if you did actually come up with the money somehow (which is a pretty huge if as you say), once everyone has X “base money” per month, doesn’t that mean the cost of living (specifically renting) will rise to match this new “base”?
andriamanitra 2 hours ago [-]
The cost of living would certainly rise somewhat but the point is that UBI is redistributive: the same absolute amount to everyone raises low incomes by a larger percentage than high incomes. Long term effects are hard to predict but in the short term it would mean the poor doing slightly better while the middle class is slightly worse off. The non-working (owning) class would be mostly unaffected as assets are insulated from inflation.
Another factor to consider is that putting more money in the hands of people in need of <thing> means producing <thing> becomes more profitable and thus more investment and resources are directed towards <thing>. If we assume the economy works the way the proponents of capitalism say it does, this should eventually drive the cost of living back down.
But personally I think the biggest benefit of UBI would be the reduction in number of people who are desperate enough to accept work – both legal and illegal – that is unfairly compensated, inhumane and/or immoral. The existence of that class of people is the driving force behind many societal problems. Exorbitant amounts of resources are wasted treating the symptoms of those problems instead of fixing the root cause.
hashmap 5 hours ago [-]
You never see it how? Like in terms of raw resources, or political will?
troosevelt 5 hours ago [-]
I mean the numbers. 12k per year is peanuts. You cannot live off that and to do it we'd be nearly doubling the budget (that's old data, it's probably not that portion of the budget anymore).
That 12k doesn't include healthcare, it doesn't include a lot of things. It's basically ensuring that people live well below poverty level, and for what? I just don't get how the numbers work, even if it was politically feasible.
I'd much rather have free healthcare and other amenities other countries have. Here in the US if you lose your job there is virtually nothing between you and the streets besides family and friends.
I'm facing this right now. I cannot get a job in tech which means restarting my career. Getting a job right now is not easy in any field especially not in anything like a living wage. If I did not have my parents I would be on the streets right now, thankfully I don't have a mortgage or anything like that. I'm not sure how much $12k per year would really help, it certainly wouldn't pay for housing.
It's rough out there.
animegolem 5 hours ago [-]
And even if you did get the 60k and never can find work again are you gonna be happy about the next door neighbor working for 120k and getting his 60k on top?
site-packages1 5 hours ago [-]
Well I can tell you that I work 40+ hours a week and am very unhappy my neighbor has a more expensive house than me. Someone should do something!
abakker 4 hours ago [-]
All the proposals I’ve seen would set the marginal tax rate on the 120 so high that his earnings would end up more like 40k from the 120k job and then he gets his 60. So, still some benefit to working, but a very progressive tax rate on higher earnings. Not sure I agree with this, but that is what I’ve seen.
Aurornis 4 hours ago [-]
Your neighbor would get $60K UBI but their tax bill would go up by $80K because the government needs tax revenue to pay the UBI.
For high levels of UBI it’s not possible to get all of the necessary tax revenue from taxing billionaires or corporations or other simplistic ideas that sound good unless you do math.
stale2002 4 hours ago [-]
> do you really believe a basic income funded by it will make up that loss? It won't.
Almost definitionally it would. If society is saving a bunch of money on all that saved labor, that extra value is still there; it just needs to be appropriately redistributed.
bobsmooth 4 hours ago [-]
>Do you think you can even get that just taxing these companies?
If we go back to a 60% corporate tax rate, for sure.
Aurornis 4 hours ago [-]
You could put a 100% tax on revenues (not profit) of AI companies and it would come out to a low couple hundred dollars per person per year right now.
A 60% corporate tax rate wouldn’t get to the levels needed for UBI proposals either.
what 2 hours ago [-]
They’ll just find a way to have $0 of profit. You have nothing to tax.
pydry 5 hours ago [-]
Just as hyperloop was designed as a techbro pie in the sky notion to kill high speed rail, basic income as an idea is designed to kill more realistic attempts to shore up welfare, e.g.
* A job guarantee like we had during the great depression
* Lowering retirement age
* Raise minimum wage
* Expanding medicare to everyone
It's worth remembering that if AI really can do everyone's jobs then it'll be wildly deflationary so there's no need to worry about pesky government spending on this stuff or paying people more. Spend spend spend, baby!
Ah, you're worried it can't do that? Maybe it is mostly smoke and mirrors then.
spwa4 4 hours ago [-]
So the problem with 3 out of 4 of your proposals is that, right now, they mean young people need to work more to achieve them. Money is an issue, but money by itself cannot solve it; it really needs to be backed with more people working. That's not going to happen; in fact, fewer people will work.
So without AI, the path forward is obvious: those 3 will become worse. Lowering retirement age, raising minimum wage, and expanding medicare won't happen without AI. They can't.
We already are reasonably close to a job guarantee. If unemployed people would accept any job, unemployment would drop by a lot. Not to zero, obviously, but a lot. Unemployment is also pretty low by historical standards, so fixing unemployment with a job guarantee can't fix much. We'll need something else.
> It's worth remembering that if AI really can do everyone's jobs then it'll be hyperdeflationary so no need to worry about pesky government spending on this stuff.
So yeah, I disagree. If you're going to assume AI will just jump to how capable it'll be 100 years from now, then you need to think a bit deeper. What AI effectively does, it provides capital-based labor. You buy a robot. Robot costs a lot, but operational expenses are marginal, energy and (maybe) "tokens". Add solar power, and let's say local AI becomes a thing, at least for normal robots, and you need nothing other than the initial cost of the robot.
Okay, so this will mean everything can be staffed with tens of thousands of these robots. Remote mine? No problem. 500 robots in your house? Why not. Cleaning very large facilities? Not a problem. Farm hundreds of square kilometers? Fine. Dig a canal to avoid the strait of Hormuz and just do it with shovels? Let's get to it. AI can be a universal machine that can do anything labor can achieve.
Obviously AI will massively increase the output of the economy, and people will figure out what to do with that, as people will want a shitload of things done. Which means the problem you're identifying will be trivial to solve, and we'll figure something out.
mschuster91 4 hours ago [-]
> Obviously AI will massively increase the output of the economy, and people will figure out what to do with that, as people will want a shitload of things done. Which means the problem you're identifying will be trivial to solve, and we'll figure something out.
Historically, that "we'll figure something out" has usually meant the economic wipeout of large parts of the population, sooner or later followed either by some epidemic event or other "act of god" (like fires) that was a consequence of squalor and poverty, or by some sort of war to thin out the herd.
I'd prefer if history would not repeat itself for once.
spwa4 3 hours ago [-]
> Historically, that "we'll figure something out" has usually meant the economic wipeout of ...
Uh, historically everything has usually meant the economic wipeout of large parts of the population. It still means that in most third-world countries. Economic power is not the huge differentiator here.
fluoridation 5 hours ago [-]
Job guarantees and higher minimum wages are just UBI with extra steps, while lowering retirement age is just conditional UBI by another name. If you're giving people more money in exchange for nothing (or nothing of any value to anyone, as in the case of a job guarantee), it's effectively indistinguishable from UBI.
JumpCrisscross 5 hours ago [-]
> Job guarantees and higher minimum wages are just UBI with extra steps, while lowering retirement age is just conditional UBI by another name
The extra steps reduce costs and encourage offsetting production. Those are important steps!
pydry 5 hours ago [-]
"When our grandparents built the hoover dam, the lincoln tunnel and the triborough bridge with a job guarantee that was just money for nothing - UBI with extra steps."
^ this would be an accurate representation of your opinion then?
fluoridation 4 hours ago [-]
That job guarantees occasionally produce useful things doesn't mean that they don't overwhelmingly produce useless things, or things that are more expensive than they're worth.
JumpCrisscross 4 hours ago [-]
> doesn't mean that they don't overwhelmingly produce useless things, or things that are more expensive than they're worth
One could say the same thing about all the little art projects a hypothetical society on UBI might busy itself making. The pertinent difference seems to be one about scale and co-ordination. Job guarantees say we work together–through a centralised power–to build big things. Handing everyone cash leans more towards arts and crafts and consumption.
fluoridation 4 hours ago [-]
>Job guarantees say we work together–through a centralised power–to build big things. Handing everyone cash leans more towards arts and crafts and consumption.
Creating busywork doesn't strike me as a particularly worthwhile endeavor, compared to idleness.
guzfip 5 hours ago [-]
$12k a year is plenty. You’ve just been raised above your natural standard and will have to take a while to be deprogrammed from your “lifestyle expectations”.
happytoexplain 5 hours ago [-]
This is one of the most horrifying comments I've ever read on this website. It's practically a dare to engage in civil war or violent revolution. People fundamentally experience life as relative - as changes. You can't "deprogram" intrinsic human nature. You can just wait 80 years for everybody who's not used to the new hell to die.
omikun 8 minutes ago [-]
You mean 12k a year with free housing and free health insurance?
troosevelt 5 hours ago [-]
Have you lived on 12k?
24k puts you near poverty level. $1k per month will cover food expenses, it won't cover transport, shelter, and certainly not medical. On 12k per year you have enough money for food and praying that an emergency doesn't happen. It's hard enough living on 40k, and I'm not even in a place where costs are expensive.
krapp 4 hours ago [-]
UBI will never happen in the US so it's a pointless argument. Americans will have plenty of pawn shops and short-term loan services to help them, though.
hackable_sand 3 hours ago [-]
I'm literally doing it right now
It is kinda funny to see you guys petrify at the thought of people living in poverty, pretend you care, and then use us as a political foil in your useless debates.
Where's the money you owe us?
happytoexplain 2 hours ago [-]
How is not wanting to live in poverty using the poor as a foil? How is it hypocritical/fake to care about people who are in situations that I don't want to be in? Isn't that just logical?
bobthepanda 5 hours ago [-]
“Let them eat cake,” or whatever.
Telling a bunch of people they should accept being poorer has always worked out historically.
infamouscow 4 hours ago [-]
I've only been slightly joking about starting a company that sells rope and guillotines.
JumpCrisscross 5 hours ago [-]
> $12k a year is plenty. You’ve just been raised above your natural standard
I get where you're coming from. But this is politically unworkable, and for good reason. If AI increases productivity, that means more wealth, which means living standards should go up.
AshleyGrant 5 hours ago [-]
> $12k a year is plenty. You’ve just been raised above your natural standard
> I get where you're coming from.
You do? Have you priced out health insurance lately? I have. Insurance on HealthCare.gov for my partner and I would be $1700/month for what amounts to catastrophic coverage. It had around a $20k deductible and covered nothing other than an annual physical prior to hitting the deductible.
With $2k/month to work with between us, I guess we have to somehow find a place to live and eat on the remaining $300 as we pay for our functionally worthless health insurance since there is no way in hell we could afford to pay the deductible.
JumpCrisscross 4 hours ago [-]
Their numbers are wrong. But their fundamental argument, I believe, is degrowth. That we are living beyond our means and need to lower our expectations of living standards to live sustainably. It's a philosophically appealing argument. It's also wrong, unless you're comfortable with the inevitable violence and likely population destruction that would need to ensue from an honest degrowth agenda.
smeej 5 hours ago [-]
It didn't even occur to me that this might not be sarcasm until I read the other comments. Still fighting to hold onto that assumption.
Eupolemos 5 hours ago [-]
These years, knowing what is tongue-in-cheek can be very difficult.
Many of us see the current US administration as being either real life modern nazis or heavily influenced by such.
So I was wondering; are you being serious?
CodeCompost 5 hours ago [-]
Your basic income is 12k? Congratulations, your rent just went up 12k a year.
jazz9k 5 hours ago [-]
This is the part most people don't understand or intentionally ignore. It will accelerate inflation and 12K will be worth even less than it is now.
The natural progression of this is always government price fixing, which always ends up in complete destruction of the economy.
jazz9k 5 hours ago [-]
"lifestyle expectations"
$12k might be nice in parts of Asia, but when the average rent is $1200/month, it doesn't go very far anywhere in the US.
Lerc 4 hours ago [-]
Like the post above says, there are multiple issues at play with AI. The same can be said about universal basic income.
The pay levels are not comparable because you are also recompensed with time. You may choose to spend your time in a number of ways that you find rewarding that also reduce your expenses. Making your own meals, clothes, furniture, beer, wine etc. There are a lot of people who would enjoy doing these things but are too time poor to do so.
Your expenses also reduce by the amount you must spend in order to make yourself available to work. Travel, work clothes, medical certificates when sick. You can spend a lot in order to be paid.
If you want a world with a reasonable distribution of income levels, it stands to reason that those receiving more right now should receive less. Certainly, the absolute wealthiest should reduce the most, but on a global scale, it is hard to defend that those in the top 10% of incomes should retain their position.
The proposal for how much a universal income should pay is a variable to be argued itself. I can certainly see it being argued for at a lower level than ultimately desired since something is better than none.
In a sense, the end state of a universal income in an equitable world would be remarkably simple: the income available divided by the world's population.
Those receiving more than their share now may not be happy about it, but I'm not sure they have a right to their larger portion either.
Aurornis 4 hours ago [-]
> The easiest way would be a straight tax on AI usage, and using that tax to pay a universal basic income
Every call for UBI should be qualified with two estimates:
1) How much money you think UBI will pay out
2) How much money you think the tax will generate
Creating a UBI program with AI taxes sounds like a clean solution to something until you do any math.
If we estimate today’s AI revenues across all the big providers at $100B annually (a little high) and divide by the population of the US, I get around $24 per month per person.
So a 100% tax on AI plans would allow us to give UBI of about 80 cents per day.
Even 10X the revenues wouldn't bring that to parity with UBI expectations. A 100% tax would also be an incredible gift to foreign AI companies that could offer similar services for half the price to everyone else in the world.
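For reference, the estimate above works out like this (both the ~$100B revenue figure and the ~340M population are assumptions, as stated):

```python
# Back-of-the-envelope: 100% tax on estimated AI revenues, split
# evenly across the US population.
annual_ai_revenue = 100e9    # assumed: ~$100B/year across all major providers
us_population = 340e6        # assumed: ~340M US residents

per_person_per_year = annual_ai_revenue / us_population
per_person_per_month = per_person_per_year / 12
per_person_per_day = per_person_per_year / 365

print(f"${per_person_per_year:.0f}/year")     # ≈ $294/year
print(f"${per_person_per_month:.2f}/month")   # ≈ $24.51/month
print(f"${per_person_per_day:.2f}/day")       # ≈ $0.81/day
```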
cortesoft 4 hours ago [-]
This is based on the assumption that AI is going to take all our jobs. If this is true, then as more jobs are absorbed by AI the revenue would increase.
Aurornis 4 hours ago [-]
You’re assuming that this AI will be in the same taxable jurisdiction as the people whose jobs were replaced.
The work that is most replaceable by AI is work that is mostly digital. That work most easily moves to another country.
When the work is replaced by AI you can relocate it to another country much more easily than when you have to relocate workers.
zozbot234 49 minutes ago [-]
The main critique of AI is that it's a dumb hallucinating parrot. It can't do genuine human quality work at all, outside of extremely narrow domains like basic translation and copyediting. Even for Q&A, while it can be useful by quickly accessing a huge storehouse of learned knowledge, the vulnerability to hallucinations means that human expert verification will always be required.
pj_mukh 4 hours ago [-]
I don't think the last two critiques are good critiques at all. The environmental impact is a function of our energy sources, not our energy uses. Complaining about energy and water when we have effectively infinite energy beamed down to us, on a planet that is 70% water, seems silly.
And AI "Ikea-fies" art and creativity. It doesn't get rid of it. Of course you can get a generic table from IKEA, but for a real unique piece, you need to go to a real artist. Always.
The real main critique is for jobs where AI is a one-to-one replacement: your taxi driver, your dock worker, etc. I don't think UBI is a viable solution (I used to), but nothing replaces the community and status that a real job gives you. This is going to be a tough one.
ashley95 4 hours ago [-]
> The first is the fear of job loss, and I feel like this is the most straightforward to deal with.
In the same way that it was straightforward to deal with job loss from the industrial revolution, or when the US shipped away all its manufacturing capability?
cortesoft 3 hours ago [-]
I mean, kind of? It was fairly straightforward, and unemployment and poverty continued to decrease as those events occurred.
foogazi 4 hours ago [-]
> The first is the fear of job loss, and I feel like this is the most straightforward to deal with. Personally, I think the solution should be to share the productivity of AI with society at large, in particular since AI owes most of its abilities to training on the works of society. The easiest way would be a straight tax on AI usage, and using that tax to pay a universal basic income.
How much UBI you want from this AI tax ?
I don’t think they’d give me what I want
TrevorFSmith 3 hours ago [-]
I think you're missing one of the major reasons people are against "AI": the jerks at the top. When obviously nefarious people are lining their pockets and not bothering to even pretend to care about the people around them, it's no surprise they're hated.
oytis 5 hours ago [-]
Universal basic income is not an adequate replacement for a good career. Universal unconditional prosperity might be one, but it's not clear whether AI can really do that.
operatingthetan 5 hours ago [-]
I think you may be going too far, as in your critiques assume the tech is further along than it actually is. There are three fundamental problems for mass AI adoption/AGI:
1. Lack of memory/continuity
2. Lack of agency
3. Lack of self-awareness
Based on my understanding of the basic 'loop' of an LLM, solutions for these may be decades off or not possible. Which leads me to the fourth problem:
4. Lack of compute
To get anywhere near AGI we need massive context windows. The whole thing is a mess.
neonstatic 3 hours ago [-]
I think people really confuse their imagination and expectations with reality. There's so much talk about AGI and mass layoffs. Then there is my experience.
I was talking to Claude and ChatGPT, trying to fix an issue with a simple function in Rust that returns a boolean depending on day of week and time of day. The logic looked ok to me, but tests were failing. Notably, my real-world-data-derived tests were succeeding, while the brute-force/comprehensive tests written by Claude were failing. I wanted those "just to be sure". Both Claude and ChatGPT were spinning their wheels, introducing fixes, then undoing prior fixes, and so on. They also updated the tests. We went from one failure to another, while they confidently reassured me that "this is the fix", that they had found the "crucial bug", etc.
Turned out my logic was correct from the beginning. My tests were correct. Claude's tests were broken. I realized this by writing my own brute force test. Just a simple loop with asserts and printlns to see what is failing. I did what the machine was supposed to do for me. In less than 5 minutes I fine tuned the test to actually check what it was supposed to be checking and voila. The "fast" thinking machine episode took me 2 hours and only produced frustration. Sorry I should learn to speak the language - AI reduced my development velocity :)
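The check was nothing fancier than this kind of loop (the function name and the opening-hours rule here are invented for illustration; my actual code was Rust and differs):

```python
# Illustrative stand-in: is_open says whether something is available
# for a given day of week and hour. Rule: weekdays, 9:00-17:00.
def is_open(day: int, hour: int) -> bool:
    # day: 0 = Monday .. 6 = Sunday
    return day < 5 and 9 <= hour < 17

# Brute-force every (day, hour) pair and restate the spec independently,
# instead of trusting generated test cases.
failures = []
for day in range(7):
    for hour in range(24):
        expected = day in (0, 1, 2, 3, 4) and hour in range(9, 17)
        if is_open(day, hour) != expected:
            failures.append((day, hour))
            print(f"mismatch: day={day} hour={hour}")

print(f"{len(failures)} mismatches in {7 * 24} cases")  # 0 mismatches
```

The point is that the input space is tiny (168 cases), so exhaustively checking it against an independently restated spec settles the question in minutes.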
The only poverty I see coming is from collapse of quality after these dumb machines are used to replace people, who actually know what they are doing.
operatingthetan 33 minutes ago [-]
And if the current models really are so great, why do we need to have a massive hype-train for each time the number goes up 0.1?
SpicyLemonZest 4 hours ago [-]
All three of these problems are thoroughly solved by widely available tools.
operatingthetan 4 hours ago [-]
They are? Is your LLM ready to run your organization without further input from you or anyone? Do you realize that "memory" requires eating your hilariously small context window?
Have you not had a discussion with Opus where it insists it is correct about something it is objectively wrong about for several turns?
SpicyLemonZest 4 hours ago [-]
That seems like an unreasonably high standard. I like to think that I have memory, agency, and self awareness, but I'm not ready to run my organization without further input from anyone.
> Do you realize that "memory" requires eating your hilariously small context window?
I do! LLMs are structured differently than humans, so the component we call "memory" corresponds to what humans call "short-term memory"; practical long-term memory for an LLM looks much more like what a human would call "let me write this down". But you can load it into context on demand (and commercially available systems do) when it's needed for some problem or another.
operatingthetan 4 hours ago [-]
>memory, agency, and self awareness
The LLM only currently has the illusion of these things. Hence the bubble.
I know that you (or anyone), as a human being, don't merely have the illusion of these things.
This is not like the car replacing the horse for transportation. The LLM as-is cannot fundamentally replace the person. They require the agency of a human to take turns at all, and even more so to enact change in the world.
Your LLM does not actively engage in the world because it does not experience anything. It only responds to queries. We can do a lot with that, but it's not intelligence. It can't say oh hey SpicyLemonZest, I was thinking and had an idea the other day. Because it has nothing between each query.
sumeno 5 hours ago [-]
[flagged]
operatingthetan 4 hours ago [-]
A personal attack is not necessary. You don't seem to understand my perspective at all, please read some of my other comments.
ambicapter 5 hours ago [-]
> Doing work to make it more efficient
Making it more efficient will probably >>increase<< the total energy devoted to AI, not reduce it. See Jevons paradox.
JumpCrisscross 5 hours ago [-]
There is also a likeability problem. Altman and, shockingly, to a lesser degree, Musk have terrible brands. When folks see those people at the top of these companies, folks who have been publicly saying they're going to cause massive job losses and cause human extinction or whatnot, they're going to hate the companies irrespective of the actual risk of job losses or environmental impacts.
throwatdem12311 4 hours ago [-]
Why does Dario get off the hook here? He also comes off like a greasy asshole 99% of the time.
happytoexplain 3 hours ago [-]
Virtually no "normal people" know who he is. I don't think most programmers I know even know who he is. They just know "Altman" and "Anthropic".
JumpCrisscross 4 hours ago [-]
> Why does Dario get off the hook here?
I'm curious for metrics, but Dario strikes me as being less perpetually online. Given equal time, they may each be unlikeable. But they don't put themselves out there equally–Sam and Elon are unable to focus on their work. (I'll admit I've had a soft spot for Dario since he stood up to Hegseth–maybe I'm just not seeing the equal hate he's getting.)
richardw 3 hours ago [-]
> The easiest way would be a straight tax on AI usage, and using that tax to pay a universal basic income
Problem for jobs is that there are 200 countries and all the earnings will go to a few. Universal basic income for everyone? Or just the US?
Who gets to keep their house locations in a new fair world? The person whose parents bought in the right place 50 years ago? Who pays the money these models earn, if nobody clicks ads or does a job? What is income for if we don’t work and can just ask the AI for everything we want?
What happens when the super smart AI comes up with “better” (more fair, consistent, etc) answers than you think you have to questions like the above? What if they end up socialist? Do we force it (and invite risk it escapes and fights us for the greater good) or give in to the presumably more thorough reasoning?
schoen 5 hours ago [-]
The concern I hear the most (which I don't think is common among the general public) is the existential risk one (that an AI may be created that drastically exceeds human intelligence, and that it may accidentally be incentivized to take actions that destroy most or all of human civilization).
JumpCrisscross 5 hours ago [-]
> concern I hear the most (which I don't think is common among the general public) is the existential risk one
Altman and friends' "stop us before we shoot grandma" PR tour in 2023 and '24 is largely the cause of this AI backlash. If you tell everyone you're building something that will kill us all, you will scare up investors. But you'll also turn the public against you. In truth, we have zero evidence of the alignment problem to date in the existential form. Instead, it's the usual technology enabling bad actors stuff.
SpicyLemonZest 5 hours ago [-]
The "alignment problem" as traditionally understood assumed a different path to AI development, where the best AIs wouldn't primarily operate on a substrate of human language. If AI becomes powerful enough to make human employment non-viable without being post-scarcity enough to make permanent unemployment viable, that's going to be an existential problem, and it seems no less likely today than it did in 2023.
JumpCrisscross 4 hours ago [-]
> If AI becomes powerful enough to make human employment non-viable without being post-scarcity enough to make permanent unemployment viable, that's going to be an existential problem
That's massively moving the goalposts on what counts as "an existential problem." The original framing was not economic dislocation but actual existence, i.e. existential. This new framing is a retreat to a way-of-life argument.
And I'm still calling baloney! The "AI will kill us all" argument backfired on Altman et al, so now we have an "it'll take over all the jobs" pitch. But it's all smoke and mirrors for investors. We have no good reason to expect current AI methods will lead to an AGI that can not only do most human labour, but do so economically competitively.
SpicyLemonZest 3 hours ago [-]
I don't understand how you can consider the AI industry to be in any sense retreating from prior claims. The existential problem remains an active near-future risk; you're hearing a lot about the jobs problem because it's already here, now, today. Do you not remember how much less capable AI systems were in 2023, and how implausible it seemed that they could become as good as they are now without new theoretical breakthroughs?
retired 5 hours ago [-]
Needing fewer offices, fewer people driving to those offices, less A/C and heating for those offices, and fewer resources building those offices could offset the energy usage of AI.
calgoo 4 hours ago [-]
We can just turn all the office buildings into datacenters, they already look like heating vents! Cover the windows with solar panels on the outside, and done!
cortesoft 5 hours ago [-]
The people still need to be somewhere, so while commuting could be reduced I am not sure about heating/cooling usage.
neonstatic 3 hours ago [-]
Remote work accomplishes all that, as the Covid days proved.
happytoexplain 4 hours ago [-]
No, you're backwards. The first point is definitely the most important and most tricky.
UBI is a dangerous distraction in this context. It's a mammoth cost to achieve an impoverished quality of life. It may be worth implementing in general, but it absolutely must stay out of the conversation about AI. It's like if the ruling class started announcing that they would like to imprison us all, and your "discussion" about the problem revolved around how we can make our future jail cells feel as nice as possible.
We are allowed to regulate businesses. We simply don't.
cortesoft 4 hours ago [-]
What sort of regulation do you think is needed for this?
SpicyLemonZest 3 hours ago [-]
I think frontier AI research should be outlawed until such time as there's a broad consensus on how society ought to deal with it. This would have to be coordinated internationally to be effective, but I think that would be achievable if the US sent a credible signal by forcibly shutting down any one of the major labs.
cortesoft 2 hours ago [-]
Even supposing we could somehow get the political will to do this, how would you write such a law? What counts as “AI frontier research”? How would you write a regulation around that that isn’t trivial to bypass without banning general computing itself?
SpicyLemonZest 2 hours ago [-]
As I said in a sibling comment, we're fortunate that training modern AIs requires large quantities of specialized compute. We just have to restrict GPU sales and outlaw GPU farms. I don't deny that it would be a seismic, controversial change, but I don't think it's terribly hard to implement if we can reach a consensus that we want to implement it.
neonstatic 3 hours ago [-]
This is never going to happen. If something can be done, it will be done.
happytoexplain 2 hours ago [-]
>If something can be done, it will be done.
What does this mean? It's obviously false on its face.
SpicyLemonZest 3 hours ago [-]
There were historical worries about whether a ban would be feasible, but frontier AI research as we understand it today requires large amounts of specialized compute. Even if we couldn't or wouldn't destroy the chips, we could imprison anyone who tries to start a large training run, the same way we imprison anyone who tries to buy enriched uranium.
Tyrubias 5 hours ago [-]
I understand your points, but I think what scares people is that the solutions you propose are disregarded by our politicians. At least in the US, both politicians and the large donors funding them seem to be more and more allergic to anything resembling a universal basic income, and they do their best to scare people away with fearmongering about “communism”. The US is also doing a hard U-turn away from environmental protection and is trying to frame environmental conservation as radical and harmful. Other countries might be doing better on these fronts, but it’s definitely not a good sign that the US doesn’t seem to be on board with your first two solutions.
In the more immediate run, I think the concern is that AI will reduce the ability of workers to collectively bargain and thereby grant the wealthy oligarchs even more control over their workers’ lives.
cortesoft 5 hours ago [-]
I completely agree that governments and power brokers will disregard these solutions unless forced.
However, they will also disregard any attempt to slow down or halt AI progress in general, so it isn't like the people wanting to end AI in general are any more likely to succeed than those wanting to do what I propose.
I personally feel my suggestions would be slightly more feasible to gain support for than trying to stop AI completely. The power brokers in control of AI currently certainly aren't going to stop developing and pushing AI, but they might be convinced that sharing the wealth is the only way to avoid massive revolt in the long run. While it is conceivable that the wealthy wouldn't need the masses for labor like they do now in the AI future, they still need to not be killed in a massive uprising when 90% of the population is unemployed and starving. While I know a lot of people think the plan is just to kill off that part of the population, that is not that easy to do even with an army of AI robots, and would likely be cheaper and easier to just share a bit of the productivity. I don't think it will be trivial, but I don't think it is impossible.
JumpCrisscross 5 hours ago [-]
> politicians and the large donors funding them seem to be more and more allergic to anything resembling an universal basic income
UBI has been a major donor priority, at least on the left.
spwa4 5 hours ago [-]
In my opinion the main, and really only, issue: AI is a necessity. Everything from war (including defense departments), to jobs, to rental advertisements, to food packaging, to restaurant reviews, to news, to education, to programming, to architecture, to politics ... will have to change due to AI. Not changing them is not really an option. Everything needs to be figured out here.
A lot of this will both cost money AND require people to change their jobs, their investments, their equipment, ... And they hate it.
Everyone, including governments will have to adapt.
And to add insult to injury, everything comes from the US and it's really expensive.
synecdoche 4 hours ago [-]
UBI drives inflation. All other effects follow from that.
cortesoft 4 hours ago [-]
I am not sure if inflation will work exactly the same in a world where AI/robots do all the work.
Inflation is driven by scarcity. More demand for a fixed/limited resource drives up the price. Historically, every good and service humans bought followed this pattern, so we didn’t even have to consider an alternative.
Already in our current economy, however, we have seen a good portion of our economy shift to things that do not have this characteristic. For example, take something like a video streaming service. The marginal cost for additional demand is small enough to be almost negligible; if everyone in the world decided they wanted a Netflix subscription, there wouldn’t suddenly be a shortage of streams or a run on episodes of The Great British Bake Off. They would have to build more datacenters, but the cost per additional user is tiny compared to almost every other traditional good that came before.
If AI and Robots start doing all work, then this would spread to more of the economy. The increase in productive capacity would severely reduce the limitations that have historically driven inflation. We obviously have to invest in building robots and AI, but once we have enough robots they would be making more of themselves and we would be limited by natural resources, but we could use robots to get more of those, too… and we could focus on clean energy, since we would have plenty of robots to do that work, too.
bdangubic 3 hours ago [-]
The USA will never have UBI, period. So any idea that includes any mention of it is an absolute non-starter. Outside of the USA, perhaps, but for us that is never happening.
paganel 5 hours ago [-]
> How do we treat AI creative work?
We erase it and call out the ghouls “creating” that shit, simple. They deserve being called out for creating shit and poisoning our minds.
keybored 5 hours ago [-]
> The first is the fear of job loss, and I feel like this is the most straightforward to deal with. Personally, I think the solution should be to share the productivity of AI with society at large, in particular since AI owes most of its abilities to training on the works of society.
This is straightforward? This is a colossal task. Monumental. Billionaires own it. That’s the political status quo. You could build something to counter those centers of power. But from what base?
Well-paid software developers have scoffed at or been ignorant of worker organizing for, maybe forever? But I have good paycheck and equity... Now what?
cortesoft 5 hours ago [-]
'Straightforward' as in there is a clear way to solve the issue, not that it will be easy to enact it.
happytoexplain 3 hours ago [-]
But it's not a clear way to solve the issue. UBI, even if enacted tomorrow, doesn't stop the enormous crash of the middle-class, and the fallout of that. Maybe it will stop some people from literally dying - that's "solved"? It's a small buffer at the very worst end of a gigantic problem. The word "solve" is totally ridiculous.
jiggawatts 5 hours ago [-]
> mislead and commit fraud at scale
This is the "safety" messaging that OpenAI and Anthropic keep harping on and on, and on about, while whistling a merry tune as they turn around and sell AI to the US military and worse, to the tune of $billions/year already.
The "and worse" needs elaboration, because fundamentally the single biggest cash cow for AI vendors will be (and maybe already is) implementing a dystopian future where everything we say, type, or do will not just be recorded but also: read, analysed, and cross-correlated by unfeeling heartless machines tasked with keeping us in line.
I'm not being paranoid, President Biden said as much, but only in reference to China. If you think only China has motivation to use AI to keep a lid on dissent, I have a bridge to sell you. And if you think the Land Of The Free(tm) will never abuse AI in this manner, well... I have some bad news. You may want to sit down.
Here in Australia, the cyberpunk dystopia is already starting to be rolled out. A customer of ours asked their IT team to hook up a variety of HR-related information sources to their new AI system tasked with making recommendations for hiring, promotion, and demotion.
Welcome to 1984, citizen.
dualvariable 3 hours ago [-]
Yeah, AI-enabled surveillance capitalism is likely to be every bit as bad as what people imagine China is doing with their social credit scores.
And the scary thing is that you can probably easily sell it to Democratic voters if you track racism scores for people, so you can filter people out of your dating pool or job/rental applications. Most people don't care about privacy as a fundamental right, and they'll roll over and compromise if you give them a way to track what they hate. You just need to make sure it is "bipartisan" and it'll be wildly popular.
watwut 5 hours ago [-]
> The easiest way would be a straight tax on AI usage, and using that tax to pay a universal basic income.
The very same CEOs are extremely against social support, against any taxes on themselves, and against any governmental agencies that help or protect people.
How can this possibly be the easiest path in the world of Thiel, Musk, Trump, Vance, and Palantir, with the Overton window moving toward economic conservatism for years?
sofixa 5 hours ago [-]
There are a few other issues.
Like copyright. All modern LLMs are built on troves of copyrighted material that was used in their training. AI companies are claiming this is fair use, while pretty much all of the copyright holders would strongly disagree. This is going to get litigated for years, but regardless of what various legal systems decide, morally, people can be against this.
And people are already sick and tired of AI-generated content being used to replace human made content, be it on Spotify or TikTok. This is part "AI replacing humans", part "I'm being scammed by lower quality content".
cortesoft 5 hours ago [-]
I feel like this is covered by the last question about how we deal with AI and creative works.
MBCook 5 hours ago [-]
And we’ve seen the cases of people trying to use the AIs to train new AIs!
OpenAI: We’re allowed to steal everything to train our AI and you can’t complain
Developer: Ok, I’ll use your AI to train mine
OpenAI: NO NOT LIKE THAT, UNFAIR
contingencies 4 hours ago [-]
Picasso famously said "Computers are useless, they can only give you answers."
You can't put things back in the bag. Perhaps the true underlying social problems are:
1. There are too many humans and not enough jobs.
2. The capitalist system only rewards profit seeking and cost externalization.
3. Our democratic representation myth is dead and buried.
4. Even in the developed world, middle-class security is gone.
So here's my question: given the current global system has failed and is clearly in its death throes, as a pan-national species how can we transition to a less mono-focal economic rationalism driven means of governance and self-organization without turning in to an autocracy or reinforcing negative nationalist bloc-level thinking that will tie us in to the same old human-thump-human stone age ape-ism and environmental cost externalization?
Perhaps AI can help in areas like improved education, improved media, proposals for improved government process or process transition for enhanced efficiency. Enforce transparency and accountability in the halls of power by reducing human process and corruption. Public auditable decision making and public auditable oversight. It's at least potential grounds for partial optimism. The best I can summon under present conditions. Of course, we want to avoid a dystopian global AI autocracy, the technocratic basis for which we have already well established, but if you view the present system as a dystopian human autocracy with the same technocratic basis (an increasingly rational perspective given recent events), then it starts to look more rosy.
Rekindle8090 5 hours ago [-]
[dead]
Devasta 5 hours ago [-]
> The easiest way would be a straight tax on AI usage, and using that tax to pay a universal basic income.
If the Epstein class wouldn't go for something like this in a world where they needed workers to produce, the idea that they will when we are surplus to requirement is inconceivable.
cortesoft 5 hours ago [-]
I should have said 'most straightforward', rather than easiest, because I agree it will not be easy to make it happen.
Dwedit 4 hours ago [-]
I think you left out the part about AI being a plagiarism machine.
rescripting 5 hours ago [-]
The AI CEOs have been screaming for years now about how AI is scary, you should be afraid of it and it’s going to take your job.
“Mythos is too dangerous to release.”
“OpenAI offers a bounty if you can get ChatGPT to teach you how to do a bioterrorism.”
“Agentic agents will replace entire categories of jobs. They’ll just be like, gone”
This is all signaling to their customers; no not you on their $20/month plan, the governments and corporations of the world who have deep pockets, fat to trim, and borders to defend and expand.
It’s no surprise that people don’t like AI. It’s not for people.
Tyrubias 5 hours ago [-]
This was evident everywhere except within the AI industry itself. The rhetoric from many of the industry’s top leaders has been “this technology will eliminate millions of jobs, fundamentally reshape countless other jobs, and automate the use of lethal force, but we’re going to develop it anyway”. Many of the current economic woes, including mass layoffs, have been blamed on AI by the very executives conducting said layoffs. In addition, the major AI companies have shamelessly stolen intellectual property to train their models and shoveled AI down everyone’s throats. Is it any wonder that the general public hates AI? The AI industry isn’t exactly doing its best to appear likable.
monksy 4 hours ago [-]
Rory Sutherland has a really good take on AI. Most of the AI companies are targeting a cost-cutting proposition when they should target a value-creation one. Targeting and pushing a regressive elimination route is toxic and destructive to those around it.
Then again, the CEOs of these companies want to grow their company at any cost to society.
ncouture 4 hours ago [-]
The title of the original article feels like click-bait to me. It's covering an act of violence under the pretext that people hate AI.
In fact it's a very sad story about a 20-year-old throwing their life away instead of fighting for what he believes is right through non-violent activism and/or regulation.
Last year I wrote an article asking the very question "Who will be the next Luddites?", and National Geographic followed up months later. I'm sure many before, after, or in-between covered the same topic. There is truth to it, we will be impacted, but let's not forget we went through this during the industrial revolution, and we should be better equipped than ever to fight using meaningful non-violent acts and operations.
Non-violent means don't work and get you killed by cops. This is what the people are left with.
atmavatar 3 hours ago [-]
Non-violent protests do work, though they require you hit a critical mass to become effective. There even exists a 3.5% rule[1] in political science whereby authoritarian governments will topple if 3.5% of the population engages in nonviolent protest.
One of the more famous examples here in the US is that of the equal rights marches in the 1960s ultimately leading to the end of segregation.
What I'm not sure of, though, is what kind of impact there is on the required percentage of people participating when we have media outlets like Fox News, which was demonstrated to have fabricated images during events like the Black Lives Matter protests to make them look as if they were violent.
MLK Jr.'s Civil Rights protests are an obvious counterpoint to this claim.
esalman 3 hours ago [-]
If you get killed by cops that does not necessarily mean the means are not working. All good things in life come at a sacrifice.
collingreen 3 hours ago [-]
It doesn't necessarily mean it is working either though.
Not all sacrifice needs to be all or nothing.
globalnode 4 hours ago [-]
but violence doesn't work either; even if you conquer a whole nation (or social class or insert w/e here), you didn't really win, and one day they will get their revenge, so you're better off trying the non-violent way
slg 3 hours ago [-]
Does anyone else see the disconnect between how Americans talk about our history compared to how we talk about political violence of today?
How can we glorify Thomas Jefferson and teach kids about him saying "The tree of liberty must be refreshed from time to time with the blood of patriots and tyrants" only to then condemn the spilling of any modern blood? Truly what is the difference between torching a warehouse of toilet paper compared to tossing some tea in the harbor?
How can we condemn one and celebrate the other without being hypocrites?
collingreen 3 hours ago [-]
Propaganda and "history is written by the victors"
Propaganda is the difference between rebels and freedom fighters.
hackable_sand 2 hours ago [-]
I don't think anyone should be glorifying Jefferson.
You could have written L'Ouverture instead and it would have been a great example.
cindyllm 2 hours ago [-]
[dead]
Balgair 3 hours ago [-]
So, at my BigCo, this rings very true.
We've tried to internally pitch many ideas to the larger organization before but mostly got nothing back.
Finally, one of the various board members talked to my boss and told them that, essentially, it has to be top line growth, not bottom line savings.
We looked this up and it came down to some MBA mumbo-jumbo about how X% of growth is better than that same X% of savings once you run the math (?). Look, I know, that's not how percentages work and I know that savings actually do matter. But in 'I have an MBA-land' the mantra is topline > bottomline.
So, then we started to pitch ideas around growth (new lines, more customer sales, more customers, etc). Which went ... nowhere ... again.
Time goes by again, and another helpful person reaches out and tells us that our ideas are 'not worth considering' as they 'don't meaningfully impact revenue targets'. Again, essentially, just to justify the salary-time that these internal boards spend, the idea has to be net positive. Then we learned that, no, it has to impact revenue by 1%. For our BigCo that is in the ~$10M ballpark. We do have the customer base to support that, but it is in the revenue ballpark of Atari or the Hypixel servers.
Look, either way, the run-around that I get told is that for AI projects that we pitch internally: 1) Top line growth only 2) ~1% increase in revenue (~$10M).
Now, why anyone would not just go take that ~$10M idea and not just make a company themselves is beyond me, but I don't get paid the big bucks, so who knows.
Still, that is what these BigCos are looking for: Growth in the ~$1-10M range.
autaut 5 hours ago [-]
I think the tech CEOs got a little too excited, their masks fell off, and they started saying “oh yeah, you don’t like it? Too bad, nothing you can do about it”. You’ll see them quickly backpedal to woke 1.0 when it turns out they were a bit too quick about it.
Gigachad 4 hours ago [-]
And when people showed up at Sam Altmans house.
z3c0 5 hours ago [-]
Not to mention, it doesn't actually create the productivity promised at the lower rates promised. The most enthusiastic proponents are middle-management, not actual doers.
It's an expensive route to mediocrity, which doesn't offer an edge in a market where everyone is using the same snakeoil.
the_snooze 4 hours ago [-]
They got way over their skis on this one. There's a difference between "impressive" tech vs. "operational" tech. That difference usually boils down to prioritizing engineering rigor over marketing.
EA-3167 4 hours ago [-]
Unreliable mediocrity, because you simply can never be sure when the damned thing lies/hallucinates unless you double-check everything.
So now you're wrangling an "AI" system and you're doing most of the work you would have had to anyway. ...And when you don't it can get really embarrassing.
Not the first time, surely not the last. The problem is that so much money is tied up in this thing, and the moment the music stops the bag holders are going to be utterly doomed.
girvo 1 hours ago [-]
> the bag holders are going to be utterly doomed
Good news, the plan is for us to be the bag holders as they rush to IPO.
yoyohello13 5 hours ago [-]
I’m frankly more shocked that people in the industry are surprised the general public hates them. Like it’s been non-stop fear mongering and hype from them for years, AI has basically done nothing to improve the lives of normal people, wtf did they expect?
SpicyLemonZest 5 hours ago [-]
They didn't expect anything else and aren't surprised. "The X industry is discovering..." is one of those stock phrases that people just kinda deploy willy-nilly; the article contains no argument that anyone in the AI industry didn't know or didn't expect this.
emp17344 4 hours ago [-]
Presumably these companies on the verge of an IPO don’t want the public to hate them or their product. It wasn’t exactly a calculated maneuver - they made a decision to leverage fear-based marketing and it backfired.
SpicyLemonZest 3 hours ago [-]
It wasn't any sort of maneuver! It's what they genuinely believe! Both OpenAI and Anthropic have been telling people about the existential risk of powerful AI since the very day they were founded; OpenAI has been at it since 2015, 8 years before they had any meaningful product to market.
Sam Altman still says, after being the victim of anti-AI violence, that "the fear and anxiety about AI is justified" and "it will not all go well".
People simply refuse to believe that AI companies are serious about this, and get twisted into knots trying to understand why AI companies would choose this messaging under the premise that they can't be serious.
operatingthetan 5 hours ago [-]
If they are in a cult that shuns outside opinions, it could be a surprise when they find out...
threethirtytwo 4 hours ago [-]
People hate admitting the truth. Everything you said here is utter bs that people say to lie to themselves.
I’m sick and tired of AI hatred without people facing the truth. People hate AI because AI is on a trajectory to replace them and become better than a human. That is the fundamental reality.
Look, don’t get angry at me. If you are on HN, chances are you’re most likely delusional and completely wrong about AI. The majority of HN called vibe coding useless and said LLMs have no potential. Now my company won’t even hire someone who hasn’t used Claude, and I haven’t touched a text editor or IDE in half a year. Same with the teeming hordes of experts on HN who said driverless cars would never come. All wrong. People on this site need to stop jumping on these bandwagons of stupidity and pointless blame games.
Can we talk about that rather than blame corporations for being what they’ve been since before AI? Yeah corporations are psychopaths and corrupt and nobody cares. Same story till the end of time. We are on a cusp of a paradigm shift and your skills as a programmer are about to be utterly trashed because an AI is on trajectory to dominate your skills.
Face reality.
Starman_Jones 1 hours ago [-]
>> People hate AI because AI is on a trajectory to replace them and become better than a human. That is the fundamental reality.
Let’s explore this fundamental reality a bit. The “and” necessitates both parts of the clause be true, but would people hate AI if it became better than them but didn’t replace them? That’s an easy no; Deep Blue and AlphaGo didn’t cause mass hatred, and machines have been broadly better than humans in some capacity for centuries - that’s literally why we build machines.
Would humans stop hating AI if it replaced them, but wasn’t able to become better than them? Again, no. So the second piece is both incorrect and unnecessary, and what we’re left with is “People hate AI because it’s on a trajectory to replace them,” which is accurate, but not exactly revelatory; many people have already come to this same conclusion, including in this very comment thread. So the good news about your face reality line is that you’ll find a lot of people already facing that direction alongside you.
threethirtytwo 36 minutes ago [-]
>but would people hate AI if it became better than them but didn’t replace them? That’s an easy no;
Yes they will. Jealousy. But they'd never admit it. What are you proud of? What skill do you value and identify yourself with? Say AI did it 1000x better than you but some law was in place to prevent it from replacing you. You'd love that law, and you'd make up some excuse to hate AI.
>Deep Blue and AlphaGo didn’t cause mass hatred
Excuses. Just think a little rather than finding some obvious surface-level reasoning that fits within your own bias. First, nobody hates those things because it's only a select niche that takes pride in their chess or go skills. Those people would hate AlphaGo if AlphaGo were a direct challenge to their identity as a player. But laws are in place to prevent that, as tournaments only allow humans. Why are such laws in place? Because go and chess are just games. They produce no intrinsic value, so it doesn't hurt the bottom line if you restrict AI in that case.
This isn't the case for programming and any other field out there that can be replaced by AI. Ai will be directly attacking a business skill you use to pay the rent and it is currently challenging my identity as a programmer. And laws to restrict this will be actively fought against because monetarily and utility wise there is actual real world benefits to AI.
But why do I even need to spell this out to you? You're not mentally deficient. You're not stupid. All of this is obvious. Why do I have to literally tell you why your example is biased when it is OBVIOUS? It's because you're lying to yourself. You subconsciously avoided the obvious reasoning above. You chose convenient rationale to fit the narrative YOU want. Nobody hates "alphago" lol, did you see that Korean guy's face when AlphaGo fucking dominated his ass? Come on bro.
That is the reality. And you are denying it. When there's two people in disagreement and one of them is lying to themselves... how do we know which one it is? The lie is so convincing that both people believe in it.
I'll tell you the best way to determine this. The best way is to see which person's reasoning aligns with their identity and biases. Which person is constructing a logical scaffold that is optimistic? Because lies are told to cover up the horrors of reality. Guess what? I'm a programmer. I hate AI. But I cannot lie to myself. You? Probably made up all kinds of lies about how you're not afraid of AI taking over your job cuz AI can't do this... or that... or whatever bs to help you sleep at night.
nmeagent 3 hours ago [-]
> Now my company...
Which company is that? Do let us know so I can make sure to never be your customer.
threethirtytwo 3 hours ago [-]
lol you probably already are our customer. If you tell me your identity I’ll do you an even bigger favor and ban you from our company.
hackable_sand 3 hours ago [-]
The public also hates the lies and the threats about the tech
salawat 3 hours ago [-]
>People hate admitting the truth. Everything you said here is utter bs that people say to lie to themselves.
Stares at poster silently from a lotus position waiting for the enlightenment lightbulb
>I’m sick and tired of AI hatred without people facing the truth. People hate AI because AI is on a trajectory to replace them and become better than a human. That is the fundamental reality.
Nu-bie, come, sit, be silent & reflect. When was the last time a tool was made that truly replaced the wielder? Without the wielder, a tool is nothing, without the tool, the wielder still strides as a beacon of divine potential.
>Look don’t get angry at me. If you are on HN chances are you’re most likely delusional and completely wrong about AI.
Continues staring in silence awaiting the moment of enlightenment
>Now my company won’t even hire someone who hasn’t used Claude and I haven’t touched a text editor or ide in half a year. Same with the teeming hordes of experts on HN who said driverless cars will never come. All wrong. People on this site need to stop jumping on these band wagons of stupidity and pointless blame games.
Nu-bie. Does the man disappear because the machine exists? Or is he redirected according to his nature? What nature consumes a man abandoned by his tribe? Surrounded by hoarders of the necessities & means of life? Reflect on this. Reflect also on the potential capabilities of a group of people that through attention to detail, great patience, and acts of artifice on behalf of their fellows once enabled the animation and thinking of rocks. Think very carefully about this.
>Can we talk about that rather than blame corporations for being what they’ve been since before AI? Yeah corporations are psychopaths and corrupt and nobody cares. Same story till the end of time. We are on a cusp of a paradigm shift and your skills as a programmer are about to be utterly trashed because an AI is on trajectory to dominate your skills.
The corporation is as a cup. Its direction is controlled and its agency guided by men. It is the oldest form of AI, having been with us for hundreds of years. The only thing keeping it in check is the occasional time of great strife during which generations of men wrestle the beast, to remind ourselves of wherein our problems truly originate.
>Face reality.
Nu-bie, it is time for you to resume your chores. You have not been enlightened.
dominotw 4 hours ago [-]
not to mention all the AI boosters seem to have the most hateable, scammy personalities. why are they all so smug.
a magnet for scum like boosters on X, middle management types, LinkedIn AI influencers, people making fake videos on Facebook.
rvz 4 hours ago [-]
Not a surprise. Seems like AI is more hated than crypto and this shows that the AI industry is in a bubble.
At least crypto does not take away more jobs than it creates, whereas we all know AI takes away more jobs, and no one can give a solution or explain what the "new jobs" are.
Because the value of AI lies in automating jobs away from humans. Claiming otherwise is being intellectually dishonest. Same goes for defining "AGI".
YZF 3 hours ago [-]
Crypto sucks energy and creates no value. It's complete and utter speculative garbage that also destroys the planet.
AI has real value. We can argue about whether the cost is worth the value, whether we're on an exponential improvement curve or not, whether it ends up creating jobs or destroying jobs, but AI is mind-blowing science fiction that nobody would have believed could exist 10 years ago.
binyu 4 hours ago [-]
> At least crypto does not take away more jobs than it creates
Except sometimes when there's a huge black swan event, or when the bubble pops. Such things can result in significant layoffs even though it's a completely different mechanism.
MBCook 5 hours ago [-]
Are they? I heard a presentation from some pro-AI people on Friday to the large company I work at. They said they surveyed people at an AI conference and 93% of people were excited about it.
This was said with a straight face like “people love puppies!”.
No self awareness at all.
operatingthetan 5 hours ago [-]
In consulting firms and corporations you kind of have to pretend to be into it, it's just the culture.
zmmmmm 5 hours ago [-]
It is hazardous to swim against that tide currently from a career perspective - people rapidly categorise you as generically anti-AI even if you try to express a reasonable nuanced view. It's pretty toxic.
turpentine 3 hours ago [-]
Sometimes an employer will tell you what your view on AI is too, and make you sign an agreement.
kolja005 5 hours ago [-]
Think about what the implication here is for people who answered no to that question. If I were to go up to my boss and say "I'm not interested in using AI because I think it's bad for society," I would essentially be saying that I'm not interested in becoming more productive and thus making more money for the company. That's a very poor reputation to carry around, and most people are going to avoid it. I believe that this, more than any specific actions by AI companies, has contributed to the sense of inevitability that this technology is taking over whether we want it or not.
rtdq 5 hours ago [-]
Ask anyone who is a gamer what they think of AI. I guarantee you'll get a universally negative reaction because of RAMageddon.
throwatdem12311 4 hours ago [-]
Not just that, they go absolutely fucking ballistic if they in so much as find a single AI generated texture in a game.
rolph 5 hours ago [-]
it seems to be a case of nonrepresentative sample bias
MBCook 5 hours ago [-]
Obviously. But they're using it as "evidence" that feeds their confirmation bias.
Meanwhile I saw some survey where only something like a third of Gen Z and lower are pro-AI.
Of course the survey also said like 70%+ of them still used it.
Ekaros 5 hours ago [-]
I wonder what result you would get if you ran a survey about whether people love dogs at a dog show...
Also, looking at the current market situation, how many people would be willing to tell their bosses, or even say publicly, that they think AI is quite a lot of bullshit?
pepperoni_pizza 5 hours ago [-]
Exactly.
My new favorite game at work is "guess if this person is really into AI or they just have to be because their boss is and if they weren't they would get replaced by someone who is" and it's quite hard to say.
And since the "boss" of CEOs are the investors in the stock market, and the stock market is automated to ridiculous degree, is this AI pushing for itself?
jrflowers 5 hours ago [-]
> they surveyed people at an AI conference
You can tell that everyone loves chain buffet restaurants by going to Golden Corral and asking everybody if they are enjoying their meals
The_Blade 5 hours ago [-]
yes, the Kelvin Benjamin agent
sodapopcan 5 hours ago [-]
Oh, I love puppies! There's another data point for them.
deepsquirrelnet 5 hours ago [-]
> In a provocative GitHub post, machine-learning engineer Han-Chung Lee argued that even rosy internal numbers that do show AI-assisted productivity gains are suspect, as they’re produced to hit adoption targets no one can effectively audit.
Isn't this fundamentally what MBAs do with their time? Keep going with this analysis, because it goes much deeper... In my experience, BI is often a house of cards. A lot of times it's just narrative crafting, just like we're all encouraged to do when we write our resumes.
Can you embellish a story? Can you invent a convincing political narrative? As far as I can tell, that's the fundamental unit of the US corporation.
nayroclade 5 hours ago [-]
Bear in mind, in the same survey this article is talking about, nothing and nobody had an overall positive rating amongst those polled. So yeah, AI is unpopular, but it's just one more thing that people hate amongst a broader cultural movement of generalised hate.
Tyrubias 5 hours ago [-]
What you term “a broader cultural movement of generalised hate” is just a reflection of people’s dissatisfaction and fear regarding the state of the world. They’re seeing wages stagnate and prices go up. They hear news about how well the stock market is doing, but they don’t see any of those benefits. They see their politicians spend money on war and destruction but refuse to spend money on social programs. At the same time, the rise of the Internet paradoxically makes it both easier and harder for people to question the narratives they’ve been taught. Amidst all this confusion and worry, is there any wonder people are dissatisfied and looking for someone or something to blame?
alexjplant 3 hours ago [-]
Did people act like this in the 70s too when we had stagflation, mass unemployment, gas rationing, Vietnam, Nixon, etc. to contend with? I ask that sincerely because I wasn't around then. The US got a ton of cool music and cinema during that decade (disco and soft rock excepted) but the rest of it sounds even worse than things are now.
SpicyLemonZest 3 hours ago [-]
People acted like this constantly, the 70s are incredibly sanitized in the popular imagination. A government strike force murdered 4 student protesters at Kent State in 1970; Boston had 40 race riots between 1974 and 1976.
pibaker 3 hours ago [-]
It should not take more than one brain cell to realize that in an era when employment is already perceived as precarious, you are not going to win any public favor by telling people you are taking their jobs and making them obsolete. Doubly not so when you offer no alternative path toward building personal wealth. Triply not so when you address none of the economic problems people face, like housing or healthcare costs, but make others, like social cohesion and energy prices, worse.
If the industry continues to gleefully ignore public discontent over AI's impact on society, I imagine what might happen is a public backlash that would make the post-Chernobyl anti-nuclear sentiment look tame.
Kiro 5 hours ago [-]
> Even within tech and coding, one of the areas where AI is reported to have the most promise, there’s the question of whether the productivity gains reported can be trusted.
I wish articles like this would at least acknowledge the massive adoption AI has among programmers. It's not comparable to stuff like helping you write the occasional email, which I presume is the baseline for most people outside tech. Making it sound like a minor tool that some people are still just experimenting with completely misses the impact it has already had on software development.
happytoexplain 4 hours ago [-]
The impact in software has been very hard to measure. There are so many ups and downs and variables.
Adoption in particular is a useless metric. They are forced to adopt even if it's not really helping in their case, or if it does help but using it makes them miserable, like being forced to switch jobs from something you enjoy to something you find boring and tedious. And then there's the "expertise debt" that will have who knows what impact in the coming decades.
fnoef 5 hours ago [-]
Many of these developers adopted the tools against their will, as a means to bring home a salary while they still can. In the meantime, the AI folks are working hard to eliminate their jobs entirely.
jhack 5 hours ago [-]
To a lot of people AI is just image and text generation. And yes, these uses alone aren't worth the time, money, and energy.
But there are a lot of areas where AI is helping that people don't see, like in medicine. Drug development, cancer research and early detection, CT and MRI analysis, just to name a few. These use cases are vastly more important but rarely get discussed. It's important to know that AI isn't this one singular thing, or else we risk throwing the baby out with the bathwater.
happytoexplain 4 hours ago [-]
They do see those use cases. It's not surprising that they focus on the enormous number of other, negative use cases. It's misleading to describe the medical use cases as "more important" - yes, they are, in the same way that healing a person is "more important" than ruining their lives. That's not what you're implying by your usage of the term, though.
A person having a negative attitude about AI doesn't mean that they wouldn't keep the parts that are mostly positive if they could.
fnoef 5 hours ago [-]
You know, perspective matters. Selling a knife as a tool that helps you cut onions is a completely different story from marketing it as a weapon to kill your neighbor.
AI is massively marketed by AI people as a tool to replace your job. So either the AI people are bad at marketing, or the gains in other industries are insignificant / do not generate shareholder value.
oldmanhorton 4 hours ago [-]
“Think of the children!”
When AI produces those meaningful advances in those fields, great, we can start having meaningful discussions about them. The greatest medical advancement of the 21st century is likely mRNA, or maybe GLP-1 for some. Neither was LLM-assisted in any meaningful way as far as I know (they predate ChatGPT; perhaps more primitive models were involved in ways I'm not familiar with). Until those advances come, this argument is fanfic.
Plus, in the most morbid way possible: who gives a shit about living longer if they are stripped of their career, are inundated with slop at every angle, and can’t trust any information. These are real problems that AI has already created, unlike the fanfic of ridding cancer.
mark_l_watson 3 hours ago [-]
So much of the public hates AI, at least the non-tech people I talk with. Good to see so much common sense among the general public.
While I find a Gemini Ultra subscription worthwhile for myself, most of the value is in the fun and entertainment of interacting with a strong AI in AntiGravity (I usually use Claude models), the Gemini App, NotebookLM, etc. It is intellectually interesting and fun.
Can I justify the cost to society for data centers, possibility of US government bailing out the AI tech giants, etc.?
No I can't. I think the Chinese are skunking us. Building cheaper AI is the winning strategy. GLM-5.1 and Deepseek v4 are amazingly effective for much lower inference costs.
teej 5 hours ago [-]
ChatGPT has a billion users so surely not all of the public hates it
beej71 51 minutes ago [-]
I use LLMs and I think they should be nuked from orbit. [Speaking figuratively, NSA.]
Helpful, sure. Would humanity be better off without generative AI? Definitely.
happytoexplain 2 hours ago [-]
Usage != positive feelings.
oldmanhorton 4 hours ago [-]
I feel like this can be explained in part by “I like using it for myself, but I hate when others use it.”
When you use ChatGPT for yourself, you may have a sense that what you see is made up; when someone else that you trust uses it and pronounces the output in a way that suggests it is their own, you are left doing much more complex social math to figure out if your trust in this person or entity can hold. It gets exhausting, personally.
tamimio 4 hours ago [-]
That's not really an accurate measure, even assuming it's true. I, for one, have an OpenAI account but have never used it, not once, and you can imagine a lot of people are the same; the rest maybe use it casually, or only on the free tier.
bjacobel 4 hours ago [-]
> ChatGPT has a billion users
And their company's leadership is famous for compulsively lying. Pardon me if I suspect they might be arriving at that number using creative math.
periodjet 2 hours ago [-]
It really isn’t, and we by and large don’t. The New Republic seems to be falsely equating “active Mastodon posters” with “the public”; it’s just not true outside of some very specific and insular bubbles.
amelius 5 hours ago [-]
Them: look how cool we are, stealing your data and making everybody redundant.
The people: ??
Investors: Tell us more.
layer8 4 hours ago [-]
Plus flooding the world with slop and raising hardware prices. Otherwise accurate.
autaut 5 hours ago [-]
I don’t think the public hates ai. I think AI needs a lot of money so it loudly only pursued the light-bendingly rich by leveraging the only two emotions they have:
1) greed: you will be able to fire all your employees
2) fear: if you don’t buy it someone else will and that is too dangerous for you
Of course normal people found this incredibly off putting.
nunez 3 hours ago [-]
It's unbelievable and frustrating that _we basically built our own demise._
We built the most meritocratic and accessible career path possible. If you knew how to code, and you invested in your craft (or didn't!), you were more-or-less guaranteed multiple amazing, well-paying career paths anywhere in the world.
Yet, a cohort of us decided "what if we built this thing that literally does our job? what could possibly go wrong?"
Yeah, this is gatekeeping, but the medical and legal industries have perfected that, and our industry doesn't even require advanced degrees to climb the ladder! (John Ternus only has a bachelor's in mechanical engineering!)
Why did we Eric-Andre-meme ourselves?
bwhiting2356 5 hours ago [-]
This is a problem of misaligned incentives that echoes other waves of new technology. The arrival of the washing machine was not resisted, because it directly benefited people who could now move up to higher value and less difficult work. AI doesn't seem to be playing out that way.
zozbot234 40 minutes ago [-]
> AI doesn't seem to be playing out that way.
Because it's even less useful than a washing machine. Unless you trust a frickin' humanoid robot doing your house chores, which is batshit insane as things stand.
KaiserPro 5 hours ago [-]
I think there is conflation here.
Data centres popping up near you probably means higher electricity prices, poor air quality and water problems
Sam Altman is a massive penis, with a gift for saying the wrong thing at the wrong time.
The two things that link them are "rich" people imposing their will on everyone else, publicly.
SJMG 5 hours ago [-]
I think the truth is in fact asymmetric on this front.
People, especially many SWEs, like generating with AI or, more tellingly, wouldn't want to give it up in their work.
On the other hand, people generally hate consuming the product of gen AI.
Consumer experience = mostly negative
Producer experience = mostly positive
happytoexplain 4 hours ago [-]
I disagree. The producer experience is mostly positively impacted in a very visible way: Output. Saying their personal experience has been mostly positive, on average across all of them, is probably wrong.
SJMG 3 hours ago [-]
Hmm, what are you disagreeing with here?
fnoef 5 hours ago [-]
Gee, I wonder why? Could it be because they promised to improve our lives but instead we are losing our jobs? Or maybe because there is an insane shortage of electronics for the sake of AI data centers? No, I think it must be the fact that this tech consumes more power than an average city. Actually, it must be the fact that we have autonomous killing drones now. Or maybe it's the misinformation slop? Nah, it must be the mass theft of intellectual property.
I’m honestly baffled. What’s there not to like?
raffael_de 3 hours ago [-]
Most readers seem to conclude that the AI industry should have done better marketing, when the truth is that they believed they wouldn't even have to consider public opinion because of how powerful their technology is.
porcoda 5 hours ago [-]
Not really surprising. I would guess this goes beyond just the AI and jobs issue. Your average person sees AI all over the place in contexts they didn't ask for it but can't escape. Social media is covered with AI garbage (e.g., AI-generated videos). Podcasts are being flooded with AI garbage that is a pretty overt grab for ad impressions, where quality is … not important. Appliances and consumer devices are getting AI that nobody asked for. And of course, our world of tech stuff where the selling point is more or less leaning hard into FOMO ("Everybody's doing it - don't you want to be a 100x developer and not get left behind?").
It’s easy to fixate on the OpenAI and Anthropic-level companies, but the real inescapable flood of AI garbage is coming from the downstream companies building on the core AI providers. Communities like HN have some role to play here. Maybe some peer pressure on AI founders to, maybe, not make the world a worse place?
james_marks 4 hours ago [-]
Yes. If my only AI experiences were the ones you list, along with Google’s search summary, and the modern Clippy, I’d hate it too.
My wife was shocked to learn how much she liked Claude after these forced experiences with AI.
nickvec 5 hours ago [-]
"Naturally, violence is never an answer, nor is it a politically effective tactic."
I am not condoning violence, but claiming it is not a politically effective tactic is disingenuous. I get that columnists are trying to cover their asses, but still.
ares623 4 hours ago [-]
Violence is only allowed in one direction.
krapp 4 hours ago [-]
It's only allowed in one direction, but it's effective in many.
Violence is the reason slavery ended in the US. Violence brought us civil rights laws. Gay rights. Women's rights. Labor laws. Environmental protection laws.
Every right granted by default to white Christian gentlemen at the founding of this great nation had to be taken in blood by everyone else. That's just how America is. It cannot be trusted to live up to its own standards except at gunpoint.
When, where and how violence is justifiable is a different question, of course. But the premise that "Naturally, violence is never an answer, nor is it a politically effective tactic" is simply false. If violence were politically ineffective, authoritarian states wouldn't use so much of it.
badc0ffee 2 hours ago [-]
Half of those things were not brought about by violence. Labor laws? Absolutely. Gay rights, maybe? Gay marriage was famously won non-violently by showing the wider public that gay is normal.
What violence brought about women's rights or environmental protection laws? I suppose protestors destroyed the fur market.
Devasta 5 hours ago [-]
It has destroyed art, it has destroyed public trust with fabricated videos, it has caused skyrocketing prices in components, so stuff like Valve's console cannot get made, and it's enriched freaks like Sam Altman.
The fact that AI acolytes are positively giddy about the above is just icing on the cake.
lmaoguy 4 hours ago [-]
“Naturally, violence is never an answer, nor is it a politically effective tactic. But you also cannot ignore how the tone-deaf public messaging of the AI industry has helped to contribute to this reaction.”
And yet, as the will of the people is ignored to the benefit of but few, violence will become the answer.
ares623 4 hours ago [-]
Never mind the fact that this article exists _because_ of the violence.
Legend2440 5 hours ago [-]
I am very concerned by the rise of political violence in the US, and I especially don't like how much support it gets on social media. Burning down a warehouse or shooting a politician does not make you a hero.
gpt5 5 hours ago [-]
Political polarization creates tribalism, where people align their views with their tribe and justify increasingly escalatory means to fight the "other side".
contingencies 5 hours ago [-]
Other potential macro-contributing factors may include: breakdown in local community, removal of community forums for discussion, an attention economy and tabloid journalism gravitating toward emotional reaction (TikTok) rather than intellectual dialogue (balanced journalism), social media echo chambers, removal of accessible popular education, defunding of public media, unaffordable public access to medicine, credit culture, increasingly unaffordable costs of living, and abnormally performative political dioramas. The net result is people who, unable to reason about the world around them, are drawn into emotional us-and-them thinking reinforced by echo chambers, and who decide semi-rationally to "chuck it all in" the second things get out of control financially, psychologically, or emotionally. In other words, the modern world has built a perfect breeding ground for recruitment to extremism. <s>Great time to start a cult.</s>
... and in a classic example, apparently the mere mention of concern regarding the rise in US political violence got this thread flagged. Where can you have a discussion anymore?
Legend2440 5 hours ago [-]
It got flagged because the people who are pro-violence flag any comments that disagree with them, so they get hidden.
contingencies 5 hours ago [-]
Fair theory but how do you know that?
MBCook 5 hours ago [-]
I’d say such things are very rare when people feel in control and have a voice in how their life goes. We didn’t see it for decades in the US.
harmonic18374 5 hours ago [-]
In my book, you are a hero if you sacrifice your own well-being for the utilitarian good of the public.
Many people here would call Putin's assassin a hero; the important distinguishing factor is whether it's a clear societal good or bad. If it's unclear, then it's assumed bad.
I am not disagreeing with you here. But platitudes do nothing to convince people. You need to actually explain why the world is a better place with X politician in it, because it does actually matter.
Legend2440 5 hours ago [-]
Violence isn't going to give you the quick answer you think it will.
Once you start shooting, everyone starts shooting. Bystanders get hit. Companies start defending their businesses with private armies. The economy collapses. We all lose.
Countries high in political violence are the worst places in the world to live.
bluefirebrand 55 minutes ago [-]
People who are desperate will be relatively happy with "we all lose" instead of "a few people win and everyone else loses"
teaearlgraycold 5 hours ago [-]
Especially consider how many fellow workers Paper Mario could have killed with his arson. But smart people tend to realize they can do more with their lives by not being violent.
GOD_Over_Djinn 5 hours ago [-]
I think it’s interesting that you choose to focus on this part of the situation. To me, it’s far more relevant that the general public has little, if any, recourse through legal means such as voting. This is what makes political violence inevitable, and some would say, fully justified.
Legend2440 5 hours ago [-]
This is just not true though; politicians in the US are highly receptive to voter demands.
It's just that most voters don't agree with you.
bakugo 5 hours ago [-]
What's the alternative? You think calmly asking those politicians not to sell you out to the trillion dollar corporation that wants to build a datacenter in your backyard is ever going to work? Be real.
History has repeatedly taught us that violence is usually the answer. I wish it didn't have to be this way, but it is what it is.
Westerners tend to get fulfillment from "doing things" and the feeling of individualism, self-control, and self-determination. Sometimes we use the term "control freak". Some other societies tend to be much happier with automation, without worrying so much about the less fine-grained control, how trustworthy it is, the quality of results, etc.
This is hugely generalized and a little offensive, but there is definitely a core difference that could be more thoroughly described.
morkalork 3 hours ago [-]
Post scarcity or death. We're about to face off with the great filter in the next few decades. Buckle up!
giancarlostoro 5 hours ago [-]
It's really unwarranted, some people's reactions to someone using AI. And in other cases there's no AI used, and people blindly assume something is AI and then proceed to write it off as slop; but it wasn't even AI, it was just slightly low-quality authentic content that you maybe would not have even commented on. We've all seen low-quality videos on YouTube we didn't freak out about.
fidotron 5 hours ago [-]
The group that really hate AI are the media and journalists, which makes perfect sense given what generative AI is doing to those industries.
As it stands though the whole "the public hates AI" is about as credible as that phase from a decade ago where random tweets were used to justify any position they wanted to.
marstall 4 hours ago [-]
TNR - DNR
devindotcom 5 hours ago [-]
certainly people are finding everyday uses... but a lot of those uses are necessitated by enshittification of search and other commonplace tools. so although I think many see the usefulness of the technology here and there, their experience of it is one of being forced to adopt a thing they never asked for by companies with few or no sincere or articulable values.
billions use windows and gmail but have a poor opinion of microsoft and google both for obvious reasons. I expect the same will be true of AI platforms and the usual suspects behind them.
ori_b 5 hours ago [-]
We keep setting houses on fire so that we can roast marshmallows, and wonder why the people living in those houses are unhappy.
As we do this, we promise that if we set enough houses on fire, we'll build hell. And imagine how rich we'll be if we sell fuel to keep the hell we built running.
ares623 4 hours ago [-]
All this, so people like us can do our jobs just a little bit easier, jobs that weren't that hard to begin with and in fact were quite comfortable all things considered, for employers who are promising to lay us off, for productivity gains that aren't even measurable.
Think back on a time when you and a teammate (or teammates) spent hours or days debating different technological or architectural options and their trade-offs. How much nuance and detail went into those discussions. We used to take pride in our ability to make careful, measured trade-offs. And yet with this tech, all of that is thrown out the window.
luisgvv 5 hours ago [-]
Tbh I don't care if vibe-coded software or generated art is produced, as I think the general public will eventually accept them or decide when it's worth using or consuming those kinds of products.
What I really hate is agentic customer support, sales, etc. When you have to use them, you realize how stupid the workflows, tool calls, MCP, and all the glued-together garbage are, and that it's all just there to reduce costs rather than churn.
PS: Ironically I'm working on coding an "agentic platform" for the product suite and their backend services. I simply don't feel confident about the product I'm building but I guess it is paying my bills for the moment
DocTomoe 5 hours ago [-]
Some perspective ... I really do not see 'the public hating AI' outside of a very specific demographic (17-30 year old artsy types, generally left-leaning). Average everyday people in my area either don't care about AI at all, or like it, using it as a better search engine.
The situation might be different in the States, but I'd wager Joe Sixpack, brass fisher in Montana, couldn't care less about GPT-5.5 or whatever Musk is up to these days.
I don’t think Montana fishermen have a broad impact on society, or its decision making. There’s just not that many of them.
jazzyjackson 4 hours ago [-]
On the contrary I would bet people who enjoy fly-fishing in Montana are over represented in Congress.
mannanj 5 hours ago [-]
Aren't these types of incidents only expected to rise as inequality and economic challenges grow? What are hungry, bored, lonely, and neglected people going to do? This isn't a surprise: we have neglected American health, wellbeing, and happiness, and then we tell people "AI is going to come, trust us, it's different this time," and yet for most people their lives get worse as AI company shareholders' and employees' lives get better.
I'm ashamed that we don't care more about human dignity. I care about human dignity and wonder if I'm an outlier. Even a tiny pledge and affirmation, "Hey, we see you, we are working to bring relief and guaranteed dignity to your lives by doing xyz," would help. Instead, when I ask for peace in war [edit: and basic income, anything that is an essential part of dignity] [edit 2: and I hear it's not possible right now, while that isn't said of AI investments], I hear unaccountable leadership dodging the responsibility [to their constituents] and accelerating conflict while their friends' pockets get thicker.
contingencies 5 hours ago [-]
Yes. Other potential macro-contributing factors may include: breakdown in local community, removal of community forums for discussion, an attention economy and tabloid journalism gravitating toward emotional reaction (TikTok) rather than intellectual dialogue (balanced journalism), social media echo chambers, removal of accessible popular education, defunding of public media, unaffordable public access to medicine, credit culture, increasingly unaffordable costs of living, and abnormally performative political dioramas. The net result is people who, unable to reason about the world around them, are drawn into emotional us-and-them thinking reinforced by echo chambers, and who decide semi-rationally to "chuck it all in" the second things get out of control financially, psychologically, or emotionally. In other words, the modern world has built a perfect breeding ground for recruitment to extremism. <s>Great time to start a cult.</s>
balamatom 5 hours ago [-]
The AI loves it, though!
garganzol 5 hours ago [-]
AI is precise. People are not. AI calculates things. People manipulate them to their benefit. AI precision demands people's accountability. Some people feel threatened by that, fearing that their shady games will no longer be working. Deceivers cannot stand seeing their own reflection in the mirror, so they project their own pathological traits onto it. The whole thing turns into aggression. Psychology 101, inspired by the works of Carl Jung.
BrenBarn 3 hours ago [-]
> If Altman, Amodei, and their Big Tech peers want to rebuild public trust and create a genuine technology that benefits the public, then the path forward isn’t another white paper or postulating about the existential risks of their technology. It’s sustained, verifiable action: genuine transparency about what their products can do, a willingness to accept meaningful regulation and responsibility even at financial cost, and real democratic input from communities on the growth of data centers.
They need to accept far more than that. They need to accept that they may not be able to "create a genuine technology that benefits the public" at all, and that they therefore may be required to stop completely and totally dissolve all their operations if it turns out that is what is best.
masijo 5 hours ago [-]
AI has automated my favorite part of the job: coding.
Gone is all the experience in clean code, good idioms, etc. All replaced by easily generated shitty code that can be removed and generated again as we please, until it works. No thought about the quality of code itself. Some companies are straight up forcing programmers to live in Claude Code and never even see the code, just write the spec.
It’s disgusting. And the worst part is that you can’t opt-out. If you give even the slightest hint that you don’t like AI you’re seen as a Luddite and you’ll be put next in line for the upcoming layoff.
zmmmmm 4 hours ago [-]
I think you do a good job capturing an actual microcosm of the real problems here at an emotional level - why people "hate" it.
(a) loss of fulfillment, (b) lower quality of output that nobody will care about, so the world will just "degrade", and (c) a perceived lack of autonomy ("forcing", "you can't opt out") around how adoption itself is executed
gavmor 4 hours ago [-]
This is surprising to me because I found that I am able to invest even more of my time in considering the good abstractions and idioms that I want to employ for a particular problem, and now most of my day is spent in discussing patterns and architecture rather than what brackets I have left to close.
Although, full disclosure: I have quibbled with Gemini quite a bit over the trailing comma, which clutters the diff, and buries the lede at code review.
But it's been very gratifying to refer to modules entirely by their role in a given design pattern (eg "driven adapter") and be understood. To define the idiom, and see it adhered to.
But am I operating still at too low a level? Would I be penalized, at these "some companies" for not producing shitty code?
Ah, but in my particularly forward-deployed line, there's always an element of showmanship compelling me to write demonstrable code.
But, also, how can I specify the behavior if I can't name the component? Is it really possible to "vibe" code a sophisticated piece of software entirely from the user's domain terminology? Without any intermediate abstractions in mind? Inconceivable, frankly. There are invisible walls, invisible shapes beneath the surface.
Then again, I'm young enough to have never allocated memory manually in my professional life.
happytoexplain 4 hours ago [-]
A programmer thinks and directly engages ("works with their hands") at many abstraction layers at once. You must admit the scope of this has been dramatically reduced. You may personally love one of those abstraction layers so much that you can be happy without the others, and without even the hands-on half of the one layer you love. Many people aren't so lucky.
bwhiting2356 5 hours ago [-]
Just because tractors are here doesn't mean you can't garden as a hobby. I'm also tired of the slop, but this is a culture and management problem. Every software job I've had there was tension between speed and maintainability.
happytoexplain 4 hours ago [-]
Many (most?) adults do not have time to write an appreciable amount of software outside their jobs. Further, this doesn't address the enormous impact that losing the option of doing something you find engaging and don't hate for a job has on mental wellbeing, life satisfaction, just being happy at all, etc.
bwhiting2356 4 hours ago [-]
We can have 2 out of 3:
* artisanal, handmade products
* affordable products, not just for the rich
* well-paid workers
This was true of clothing, agriculture, and will also be true of SaaS. I choose affordable products and well-paid workers, but that requires embracing automation.
ori_b 2 hours ago [-]
Are you claiming that software is unaffordable? I get the sense that it was so cheap people were unhappy with how much was shoved into places where it was unwanted.
rexpop 3 hours ago [-]
Wow, this comment exhibits a stunningly anemic appreciation for human dignity and self-determination.
rvz 4 hours ago [-]
> It’s disgusting. And the worst part is that you can’t opt-out. If you give even the slightest hint that you don’t like AI you’re seen as a Luddite and you’ll be put next in line for the upcoming layoff.
So we found something much worse than crypto.
You can opt-out of crypto, but you cannot opt-out of AI and have no choice but to participate.
forgetfreeman 5 hours ago [-]
"Naturally, violence is never an answer, nor is it a politically effective tactic." Abhorrent under normal circumstances certainly but declaring the primary drivers of both the workers rights and civil rights movements ineffective is laughable. Power cedes nothing without violence.
dominotw 4 hours ago [-]
1. Constant scaremongering about a jobs "bloodbath". Amodei is the worst culprit by far.
2. Flooding social media with obviously fake AI content.
3. Only billionaires benefiting from it and gloating about it.
franktankbank 4 hours ago [-]
I mean come on. The public hates whatever face the vc puts on.
4 hours ago [-]
AndrewKemendo 5 hours ago [-]
Welcome to the end of another AI hype cycle
Anyone who was in AI before 2022 can tell you about the last cycle, which ran from roughly 2012 to 2018: the metaverse failed, but we got TensorFlow, PyTorch, and GPGPUs.
The cool thing is that every hype cycle generates a lot of really good new AI tech and integrations that persist. This time we got GPTs, diffusion models, and Gaussian splatting.
I think this previous cycle will be seen as the penultimate one, with the next one improving permanently with no scaling back.
We’ll be fine. We have survived every winter
bwhiting2356 5 hours ago [-]
This is the first time the general public has cared about any AI industry cycle.
AndrewKemendo 4 hours ago [-]
Totally agree
1vuio0pswjnm7 3 hours ago [-]
[dead]
aifactory5 5 hours ago [-]
[dead]
CoherenceDaddy 5 hours ago [-]
[dead]
mschuster91 5 hours ago [-]
[dead]
navvyeanand 5 hours ago [-]
[flagged]
tokioyoyo 5 hours ago [-]
Can’t blame the people when it’s being sold as a utility to “reduce the workforce”, and no government is willing to prepare/plan for it.
operatingthetan 5 hours ago [-]
I'm very into AI but am generally neutral on it. Get real, every other question I ask a frontier model has some random nonsense included that is essentially a lie. These are chaos machines.
greggoB 5 hours ago [-]
Mobilized by who, and how was that achieved?
poly2it 5 hours ago [-]
Have you been on Reddit recently? Anti-AI sentiment isn't limited to technical communities; it's platform-wide. The hate is the content; apart from the rampant anti-intellectualism, there is very little engagement with any material discussing the technologies in and of themselves.
greggoB 5 hours ago [-]
I don't think you know what "mobilized" means - at least, it implies an actor with ulterior motives is driving this response in people. Usually political actors or the media are accused of this.
What you're describing on Reddit sounds like a broad-based antipathy to AI, which is just... how a lot of people are feeling?
You can criticise their motivation being based in emotions or vibes instead of facts and thoughts, but unless you have evidence to the contrary, it sounds like this is just where people are at on this topic.
poly2it 4 hours ago [-]
Of course I know what mobilised means, but with these social media platforms, the platforms themselves are at play in shaping the conversations. To clarify, I think this type of anti-AI sentiment should be studied as part of the baseline social culture of the platforms. I do not have a good answer to why it flourishes on them, other than the generic framing. Perhaps it's a new type of technological conservatism?
anematode 5 hours ago [-]
The mobilizers, in this case, are almost entirely the people leading and deploying AI rather than any anti-AI agitators.
When someone hears from these leaders that there will be a white-collar "bloodbath", then sees enshittification in their daily lives from misapplication of the tech, can no longer trust any newly published photo, etc., it's the most rational response.
5 hours ago [-]
fluoridation 5 hours ago [-]
Don't forget other indirect economic effects, like the RAMpocalypse.
Hamuko 5 hours ago [-]
[dead]
atomicnumber3 5 hours ago [-]
I agree, the hate against vaccines makes me sad.
Legend2440 5 hours ago [-]
GMOs too. Completely unjustified backlash for a technology with incredible potential.
tanduv 5 hours ago [-]
Fuck Monsanto and their patents on seeds
operatingthetan 5 hours ago [-]
The main selling pitch for vaccines and GMOs was not "this will take your livelihood!"
nobodyandproud 5 hours ago [-]
Don’t even.
Unlike vaccines, the patent misbehavior by Monsanto says otherwise about GMO.
JoshTriplett 5 hours ago [-]
That's not an issue with GMOs. That's an issue with patents and attempts to restrict replanting.
nobodyandproud 5 hours ago [-]
That’s basically the face of GMOs, so it is an issue for GMOs. GMOs for whatever reason have a terrible ambassador and I haven’t seen evidence to the contrary.
For vaccines, a good portion of the population remember vaccines being developed and marketed to help people. Then there are immigrants that remember more recently how life changing vaccines are.
Ekaros 5 hours ago [-]
Nuclear energy. Could have so much less dependency on other sources.
And then nuclear weapons. World would be so much safer if every country had sufficient arsenal of them.
jmclnx 5 hours ago [-]
All people see is job losses and increased costs, based on articles and tweet-type things. Whether due to AI or not, that is what they are seeing. It is like MAGA news; all they see is "AI eliminates jobs and AI increases your electric bill".
Nothing at this point will make people believe AI is good for the masses.
What will it take for people to like AI? I say real money, month after month, covering more than inflation, not the dumb tax deductions Trump harps on. Maybe 1,000 USD per month, adjusted yearly for inflation, funded by AI, would end this trend.
Why a payment? All they see is the wealth of the top 1% increasing almost exponentially while they are struggling to pay their 'fixed' expenses.
In reality, since 2008 the rich have been cashing in while workers have been footing the bill. That is the big issue.
clipsy 5 hours ago [-]
Since the 1970’s, actually.
tamimio 4 hours ago [-]
More like 1694, to be honest.
tamimio 4 hours ago [-]
I did mention this a few days ago here and it seems HN was in denial, but the reality is, most people don't like it. Sure, they might use it, but only because they are after shortcuts; they still have a negative sentiment towards it, and "AI" is used as a term to discredit something rather than praise it: "oh look, that's AI, yeah whatever", "yeah AI, fake and g..". I've heard it many times, online and offline.
The only people who still look positively at AI are either the ones working on it/building something with it, or the ones profiting from it, kind of like crypto a few years ago. And just as crypto is now mostly immediately associated with scams, I imagine something similar will be associated with AI soon.
Even tech people who are not directly in the AI industry hate AI, due to all the chip shortages and prices increasing across the hardware board, from gamers to sysadmins to hobbyists. I mean, the RPi now costs almost as much as a fully fledged NUC did a few years ago.
Edit: to add, did AI improve the average person's life? Nope. Besides increasing costs and tracking and violating their privacy, it flooded the internet with slop and frustrating, useless AI chat support. From an average person's perspective, it added nothing to their quality of life: it didn't make things cheaper, it didn't improve their travels, it didn't magically make them teleport, and so on. Instead, AI was used for all sorts of hostile purposes against the average person. Even from a technical perspective, have we seen any breakthrough in tech given that AI is a "superior" assistant? Nope. Software is more shitty and buggy now, SaaS prices are even increasing (probably to pay for AI tokens), software developers are saying coding isn't fun anymore, hardware designs didn't improve, and government processes still have the same bureaucratic system, plus AI. Unlike when automation was introduced decades ago, when people did notice an improvement in their quality of life.
keybored 5 hours ago [-]
Weird to have these threads and then fifteen minutes later there will be a 350+ comment, 500+ votes thread about some 200 USD/month AI subscription service which is now the I Have Seen The Light moment and My Beautiful Side Projects Are Finally Materializing.
This is creative destruction in a whole new sense. Just chugging through genuine (or human) creativity, then training on human prompting, then finally ascending near the cluster of Anthropic/AWS nuclear power plants. And people pay for the pleasure.
lpcvoid 5 hours ago [-]
Vibecoders are by now completely dependent on their subscriptions. Their skills are deteriorating; they have no other choice. It's like heroin.
4 hours ago [-]
lioeters 4 hours ago [-]
Their thinking skills are atrophying in front of our eyes in a matter of months, years. "You better get on the good stuff or you'll be left behind!"
oulipo2 5 hours ago [-]
For a good reason
very_good_man 5 hours ago [-]
The Journalism Industry Is Discovering That The Public Hates It
retired 5 hours ago [-]
Downsides of AI: massive increase in RAM prices, the housing crisis worsens as datacenters are built, massive energy usage, children are having trouble learning at school, spreading misinformation is easier than ever.
Upsides of AI: I can ask it if my farts are caused by the celery I ate earlier
badc0ffee 5 hours ago [-]
> housing crisis worsens as datacenters are built
I take your other points, but I can't see the connection there. I've heard that they increase electricity rates in many cases (poorly managed electric utilities that can't build out grid capacity without raising rates for everyone), but not that they're affecting housing.
retired 5 hours ago [-]
For The Netherlands, construction work causes emissions. There are limits to these emissions. Building a data center means you can't simultaneously build a house anywhere nearby the construction site as that would cause the local emissions to go over the set limits.
Next to that, there is grid congestion. The energy grid is currently at a critical level; if you add a data center, you will not be able to connect 20 to 30 newly built homes to power. There are currently new homes waiting for a connection to the grid before people can live there.
Space. In the densest country in Europe (excluding microstates), a hyperscale data center could have been a neighborhood.
Latest point, maybe not the strongest, is construction workers. While construction workers building a data center are different from construction workers building homes, it doesn't really help with the labor shortages in construction if electricians are all busy building data centers.
JuniperMesos 5 hours ago [-]
> For The Netherlands, construction work causes emissions. There are limits to these emissions. Building a data center means you can't simultaneously build a house anywhere nearby the construction site as that would cause the local emissions to go over the set limits.
This is an insane regulation, and I wonder if it was passed by NIMBYs whose actual goal is to prevent the construction of housing near them.
retired 4 hours ago [-]
I recently read an article that 8 cows needed to be moved 5 kilometers so they could build a bicycle path. The cows combined with the construction of the new bicycle path caused too much local emissions.
The municipality bought the emissions rights from the farmers that held those 8 cows and the farmers then had to move/remove/slaughter 8 cows.
Welcome to The Netherlands.
badc0ffee 5 hours ago [-]
Thanks for the perspective. It sounds like you'd run into the same issues (except for the electrical load) building any large industrial project, which does not bode well for your economy.
retired 4 hours ago [-]
Correct. At the moment the expansion of ASML is halted because of nearby farmers that create a negligible addition to the GDP.
If the Dutch government was a bit smarter, they would buy out the farmers and create a mega-campus for ASML, including housing for all those expats.
Edit: I stand corrected; last month ASML was granted permission to expand by 20,000 employees.
vatsachak 5 hours ago [-]
I'm actually working on something that can replace AI in the second aspect.
I've found that LLMs don't give good advice regarding diet. They just agree with whatever your hunch is.
ChatGPT agreed with my hopeful self that I got diarrhea from VR sickness as opposed to my poor food handling, which turned out to be the actual cause.
People who bring up basic income need to get serious about the numbers involved because I never see it. It's not a realistic solution.
- employ you at 60k/yr
- replace you with a machine that costs a lot of money, and also send you UBI of 60k/yr
It should be obvious the latter is not an option that is ever going to happen.
Unless we are all to become serfs, a new way to distribute resources needs to be on the table.
UBI is a salve, offered to keep victims of the system out of abject poverty. It is too little, too late.
The question that always pops up for me when it comes to UBI applied to the current capitalist system: even if you did actually come up with the money somehow (which is a pretty huge if as you say), once everyone has X “base money” per month, doesn’t that mean the cost of living (specifically renting) will rise to match this new “base”?
Another factor to consider is that putting more money in the hands of people in need of <thing> means producing <thing> becomes more profitable and thus more investment and resources are directed towards <thing>. If we assume the economy works the way the proponents of capitalism say it does, this should eventually drive the cost of living back down.
But personally I think the biggest benefit of UBI would be the reduction in number of people who are desperate enough to accept work – both legal and illegal – that is unfairly compensated, inhumane and/or immoral. The existence of that class of people is the driving force behind many societal problems. Exorbitant amounts of resources are wasted treating the symptoms of those problems instead of fixing the root cause.
That 12k doesn't include healthcare, it doesn't include a lot of things. It's basically ensuring that people live well below poverty level, and for what? I just don't get how the numbers work, even if it was politically feasible.
I'd much rather have free healthcare and other amenities other countries have. Here in the US if you lose your job there is virtually nothing between you and the streets besides family and friends.
I'm facing this right now. I cannot get a job in tech, which means restarting my career. Getting a job right now is not easy in any field, especially not at anything like a living wage. If I did not have my parents I would be on the streets right now; thankfully I don't have a mortgage or anything like that. I'm not sure how much $12k per year would really help; it certainly wouldn't pay for housing.
It's rough out there.
For high levels of UBI, it's not possible to get all of the necessary tax revenue from taxing billionaires or corporations or other simplistic ideas that sound good until you do the math.
Almost definitionally it would. If society is saving a bunch of money on all that saved labor, that extra value is still there, it just needs to be appropriately redistributed
If we go back to a 60% corporate tax rate, for sure.
A 60% corporate tax rate wouldn’t get to the levels needed for UBI proposals either.
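A back-of-the-envelope check of that claim, sketched in Python. The inputs are loudly assumed round numbers (roughly $3T in annual US pre-tax corporate profits, ~260M adults, a modest $1k/month UBI), not measured data:

```python
# All inputs are assumptions for illustration, not measured figures.
corporate_profits = 3.0e12   # assumed annual US pre-tax corporate profits, $
tax_rate = 0.60              # the 60% rate proposed above
adults = 260e6               # rough US adult population
ubi_per_month = 1_000        # a modest UBI proposal, $/month

tax_revenue = corporate_profits * tax_rate   # ~$1.8T/yr
ubi_cost = adults * ubi_per_month * 12       # ~$3.1T/yr

print(f"coverage: {tax_revenue / ubi_cost:.0%}")  # roughly 58%
```

Even if the entire 60% take went to UBI, under these assumptions it covers only a bit over half of a modest program, which is the parent's point; real figures could shift the gap in either direction.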
* A job guarantee like we had during the great depression
* Lowering retirement age
* Raise minimum wage
* Expanding medicare to everyone
It's worth remembering that if AI really can do everyone's jobs then it'll be wildly deflationary so there's no need to worry about pesky government spending on this stuff or paying people more. Spend spend spend, baby!
Ah, you're worried it can't do that? Maybe it is mostly smoke and mirrors, then.
So without AI, the path forward is obvious: those 3 will become worse. Lowering retirement age, raising minimum wage, and expanding medicare won't happen without AI. They can't.
We already are reasonably close to a job guarantee. If unemployed people would accept any job, unemployment would drop by a lot. Not to zero, obviously, but a lot. Unemployment is also pretty low by historical standards, so fixing unemployment with a job guarantee can't fix much. We'll need something else.
> It's worth remembering that if AI really can do everyone's jobs then it'll be hyperdeflationary so no need to worry about pesky government spending on this stuff.
So yeah, I disagree. If you're going to assume AI will just jump to how capable it'll be 100 years from now, then you need to think a bit deeper. What AI effectively does, it provides capital-based labor. You buy a robot. Robot costs a lot, but operational expenses are marginal, energy and (maybe) "tokens". Add solar power, and let's say local AI becomes a thing, at least for normal robots, and you need nothing other than the initial cost of the robot.
Okay, so this will mean everything can be staffed with tens of thousands of these robots. Remote mine? No problem. 500 robots in your house? Why not. Cleaning very large facilities? Not a problem. Farm hundreds of square kilometers? Fine. Dig a canal to avoid the strait of Hormuz and just do it with shovels? Let's get to it. AI can be a universal machine that can do anything labor can achieve.
Obviously AI will massively increase the output of the economy, and people will figure out what to do with that, as people will want a shitload of things done. Which means the problem you're identifying will be trivial to solve, and we'll figure something out.
Historically, that "we'll figure something out" has usually meant the economic wipeout of large parts of the population, sooner or later followed either by some epidemic event or other "act of god" (like fires) that was a consequence of squalor and poverty, or by some sort of war to thin out the herd.
I'd prefer if history would not repeat itself for once.
Uh, historically everything has usually meant the economic wipeout of large parts of the population. It still means that in most third-world countries. Economic power is not the huge differentiator here.
The extra steps reduce costs and encourage offsetting production. Those are important steps!
^ this would be an accurate representation of your opinion then?
One could say the same thing about all the little art projects a hypothetical society on UBI might busy itself making. The pertinent difference seems to be one about scale and co-ordination. Job guarantees say we work together–through a centralised power–to build big things. Handing everyone cash leans more towards arts and crafts and consumption.
Creating busywork doesn't strike me as a particularly worthwhile endeavor, compared to idleness.
24k puts you near poverty level. $1k per month will cover food expenses, it won't cover transport, shelter, and certainly not medical. On 12k per year you have enough money for food and praying that an emergency doesn't happen. It's hard enough living on 40k, and I'm not even in a place where costs are expensive.
It is kinda funny to see you guys petrify at the thought of people living in poverty, pretend you care, and then use us as a political foil in your useless debates.
Where's the money you owe us?
Telling a bunch of people they should accept being poorer has always worked out historically.
I get where you're coming from. But this is politically unworkable, and for good reason. If AI increases productivity, that means more wealth, which means living standards should go up.
> I get where you're coming from.
You do? Have you priced out health insurance lately? I have. Insurance on HealthCare.gov for my partner and I would be $1700/month for what amounts to catastrophic coverage. It had around a $20k deductible and covered nothing other than an annual physical prior to hitting the deductible.
With $2k/month to work with between us, I guess we have to somehow find a place to live and eat on the remaining $300 as we pay for our functionally worthless health insurance since there is no way in hell we could afford to pay the deductible.
Many of us see the current US administration as being either real life modern nazis or heavily influenced by such.
So I was wondering; are you being serious?
The natural progression of this is always government price fixing, which always ends up in complete destruction of the economy.
$12k might be nice in parts of Asia, but when the average rent is $1200/month, it doesn't go very far anywhere in the US.
The pay levels are not comparable because you are also recompensed with time. You may choose to spend your time in a number of ways that you find rewarding that also reduce your expenses. Making your own meals, clothes, furniture, beer, wine etc. There are a lot of people who would enjoy doing these things but are too time poor to do so.
Your expenses also reduce by the amount you must spend in order to make yourself available to work. Travel, work clothes, medical certificates when sick. You can spend a lot in order to be paid.
If you want a world with a reasonable distribution of income levels, it stands to reason that those receiving more right now should receive less. Certainly, the absolute wealthiest should reduce the most, but on a global scale it is hard to defend that those in the top 10% of incomes should retain their position.
The proposal for how much a universal income should pay is a variable to be argued itself. I can certainly see it being argued for at a lower level than ultimately desired since something is better than none.
In a sense, the end state of a universal income in an equitable world would be remarkably simple: the income available divided by the world's population.
Those receiving more than their share now may not be happy about it, but I'm not sure they have a right to their larger portion either.
Every call for UBI should be qualified with two estimates:
1) How much money you think UBI will pay out
2) How much money you think the tax will generate
Creating a UBI program with AI taxes sounds like a clean solution to something until you do any math.
If we estimate today’s AI revenues across all the big providers at $100B annually (a little high) and divide by the population of the US, I get around $24 per month per person.
So a 100% tax on AI plans would allow us to give UBI of about 80 cents per day.
Even 10X the revenues wouldn't bring that to parity with UBI expectations. A 100% tax would also be an incredible gift to foreign AI companies that could offer similar services for half the price to everyone else in the world.
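For what it's worth, the arithmetic above checks out; a quick sketch in Python, using the commenter's own assumed figures ($100B annual AI revenue, ~340M US population):

```python
# The commenter's assumptions, not measured data.
ai_revenue_per_year = 100e9   # assumed total annual AI revenue, $
us_population = 340e6         # rough US population

per_person_per_month = ai_revenue_per_year / us_population / 12
per_person_per_day = ai_revenue_per_year / us_population / 365

print(f"${per_person_per_month:.0f}/month")  # ≈ $25/month
print(f"${per_person_per_day:.2f}/day")      # ≈ $0.81/day
```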
The work that is most replaceable by AI is work that is mostly digital. That work most easily moves to another country.
When the work is replaced by AI you can relocate it to another country much more easily than when you have to relocate workers.
And AI "Ikea-fies" art and creativity. It doesn't get rid of it. Of course you can get a generic table from IKEA, but for a real unique piece, you need to go to a real artist. Always.
The real main critique is for jobs where AI is a one-to-one replacement: your taxi driver, your dock worker, etc. I don't think UBI is a viable solution (I used to), but nothing replaces the community and status that a real job gives you. This is going to be a tough one.
In the same way that it was straightforward to deal with job loss from the industrial revolution, or when the US shipped away all its manufacturing capability?
How much UBI do you want from this AI tax?
I don’t think they’d give me what I want
1. Lack of memory/continuity
2. Lack of agency
3. Lack of self-awareness
Based on my understanding of the basic 'loop' of an LLM, solutions for these may be decades off or not possible. Which leads me to the fourth problem:
4. Lack of compute
To get anywhere near AGI we need massive context windows. The whole thing is a mess.
I was talking to Claude and ChatGPT, trying to fix an issue with a simple function in Rust, which is returning a boolean depending on day of week and time of day. The logic looked ok to me, but tests were failing. Notably, my real world data derived tests were succeeding, while brute-force/comprehensive tests written by Claude were failing. I wanted those "just to be sure". Both Claude and ChatGPT were spinning their wheels, introducing fixes, then undoing prior fixes, so on and so forth. They also updated tests. We were going from one failure to another, while they confidently reassured me that "this is the fix", they found the "crucial bug" etc. etc.
Turned out my logic was correct from the beginning. My tests were correct. Claude's tests were broken. I realized this by writing my own brute-force test: just a simple loop with asserts and printlns to see what was failing. I did what the machine was supposed to do for me. In less than 5 minutes I fine-tuned the test to actually check what it was supposed to be checking, and voila. The "fast" thinking-machine episode took me 2 hours and only produced frustration. Sorry, I should learn to speak the language - AI reduced my development velocity :)
The only poverty I see coming is from the collapse of quality after these dumb machines are used to replace people who actually know what they are doing.
Have you not had a discussion with Opus where it insists it is correct about something it is objectively wrong about for several turns?
> Do you realize that "memory" requires eating your hilariously small context window?
I do! LLMs are structured differently than humans, so the component we call "memory" corresponds to what humans call "short-term memory"; practical long-term memory for an LLM looks much more like what a human would call "let me write this down". But you can (and commercially available systems do) load it into context on demand when it's needed for one problem or another.
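To make the "write it down" framing concrete, here's a toy sketch of that pattern - the `MemoryStore` type, the topic-keyed notes, and the substring matching are all illustrative inventions, not how any commercial system actually implements it:

```rust
use std::collections::HashMap;

/// Toy long-term memory: durable notes kept outside the model,
/// loaded into the prompt context only when relevant.
struct MemoryStore {
    notes: HashMap<String, String>,
}

impl MemoryStore {
    fn new() -> Self {
        MemoryStore { notes: HashMap::new() }
    }

    /// "Let me write this down": persist a note under a topic key.
    fn write_down(&mut self, topic: &str, note: &str) {
        self.notes.insert(topic.to_string(), note.to_string());
    }

    /// Load only the notes whose topic appears in the query,
    /// keeping the (small) context window from filling up.
    fn recall(&self, query: &str) -> Vec<&str> {
        self.notes
            .iter()
            .filter(|(topic, _)| query.to_lowercase().contains(topic.to_lowercase().as_str()))
            .map(|(_, note)| note.as_str())
            .collect()
    }
}

fn main() {
    let mut memory = MemoryStore::new();
    memory.write_down("rust", "User prefers explicit error handling over unwrap().");
    memory.write_down("deploy", "Production deploys happen Fridays at 09:00 UTC.");

    // Only the relevant note gets injected into the context.
    let context = memory.recall("How should I structure errors in this Rust module?");
    for note in &context {
        println!("[memory] {}", note);
    }
}
```

Real systems use embeddings and vector search rather than substring matching, but the shape is the same: durable notes outside the model, pulled into the context window on demand.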
The LLM only currently has the illusion of these things. Hence the bubble.
I know that you (or anyone) as a human being don't have the illusion of these things.
This is not like the car replacing the horse for transportation. The LLM as-is cannot fundamentally replace the person. They require the agency of a human to take turns at all, and even more so to enact change in the world.
Your LLM does not actively engage in the world because it does not experience anything. It only responds to queries. We can do a lot with that, but it's not intelligence. It can't say oh hey SpicyLemonZest, I was thinking and had an idea the other day. Because it has nothing between each query.
Making it more efficient will probably >>increase<< the total energy devoted to AI, not reduce it. See the Jevons paradox.
I'm curious for metrics, but Dario strikes me as being less perpetually online. Given equal time, they may each be unlikeable. But they don't put themselves out there equally–Sam and Elon are unable to focus on their work. (I'll admit I've had a soft spot for Dario since he stood up to Hegseth–maybe I'm just not seeing the equal hate he's getting.)
Problem for jobs is that there are 200 countries and all the earnings will go to a few. Universal basic income for everyone? Or just the US?
Who gets to keep their house locations in a new fair world? The person whose parents bought in the right place 50 years ago? Who pays the money these models earn, if nobody clicks ads or does a job? What is income for if we don’t work and can just ask the AI for everything we want?
What happens when the super smart AI comes up with “better” (more fair, consistent, etc) answers than you think you have to questions like the above? What if they end up socialist? Do we force it (and invite risk it escapes and fights us for the greater good) or give in to the presumably more thorough reasoning?
Altman and friends' "stop us before we shoot grandma" PR tour in 2023 and '24 is largely the cause of this AI backlash. If you tell everyone you're building something that will kill us all, you will scare up investors. But you'll also turn the public against you. In truth, we have zero evidence of the alignment problem to date in the existential form. Instead, it's the usual technology enabling bad actors stuff.
That's massively moving the goalposts on what counts as "an existential problem." The original framing was not economic dislocation but actual existence, i.e. existential. This new framing is a retreat to a way-of-life argument.
And I'm still calling baloney! The "AI will kill us all" argument backfired on Altman et al, so now we have an "it'll take over all the jobs" pitch. But it's all smoke and mirrors for investors. We have no good reason to expect current AI methods will lead to an AGI that can not only do most human labour, but do so economically competitively.
UBI is a dangerous distraction in this context. It's a mammoth cost to achieve an impoverished quality of life. It may be worth implementing in general, but it absolutely must stay out of the conversation about AI. It's like if the ruling class started announcing that they would like to imprison us all, and your "discussion" about the problem revolved around how we can make our future jail cells feel as nice as possible.
We are allowed to regulate businesses. We simply don't.
What does this mean? It's obviously false on its face.
In the more immediate run, I think the concern is that AI will reduce the ability of workers to collectively bargain and thereby grant the wealthy oligarchs even more control over their workers’ lives.
However, they will also disregard any attempt to slow down or halt AI progress in general, so it isn't like the people wanting to end AI in general are any more likely to succeed than those wanting to do what I propose.
I personally feel my suggestions would be slightly more feasible to gain support for than trying to stop AI completely. The power brokers in control of AI currently certainly aren't going to stop developing and pushing AI, but they might be convinced that sharing the wealth is the only way to avoid massive revolt in the long run. While it is conceivable that the wealthy wouldn't need the masses for labor like they do now in the AI future, they still need to not be killed in a massive uprising when 90% of the population is unemployed and starving. While I know a lot of people think the plan is just to kill off that part of the population, that is not that easy to do even with an army of AI robots, and would likely be cheaper and easier to just share a bit of the productivity. I don't think it will be trivial, but I don't think it is impossible.
UBI has been a major donor priority, at least on the left.
A lot of this will both cost money AND require people to change their jobs, their investments, their equipment, ... And they hate it.
Everyone, including governments, will have to adapt.
And to add insult to injury, everything comes from the US and it's really expensive.
Inflation is driven by scarcity. More demand for a fixed/limited resource drives up the price. Historically, every good and service humans bought followed this pattern, so we didn’t even have to consider an alternative.
Already in our current economy, however, we have seen a good portion of our economy shift to things that do not have this characteristic. For example, take something like a video streaming service. The marginal cost for additional demand is small enough to be almost negligible; if everyone in the world decided they wanted a Netflix subscription, there wouldn’t suddenly be a shortage of streams or a run on episodes of The Great British Bake Off. They would have to build more datacenters, but the cost per additional user is tiny compared to almost every other traditional good that came before.
If AI and Robots start doing all work, then this would spread to more of the economy. The increase in productive capacity would severely reduce the limitations that have historically driven inflation. We obviously have to invest in building robots and AI, but once we have enough robots they would be making more of themselves and we would be limited by natural resources, but we could use robots to get more of those, too… and we could focus on clean energy, since we would have plenty of robots to do that work, too.
We erase it and call out the ghouls “creating” that shit, simple. They deserve being called out for creating shit and poisoning our minds.
This is straightforward? This is a colossal task. Monumental. Billionaires own it. That’s the political status quo. You could build something to counter those centers of power. But from what base?
Well-paid software developers have scoffed at or been ignorant of worker organizing for, maybe forever? But I have good paycheck and equity... Now what?
This is the "safety" messaging that OpenAI and Anthropic keep harping on and on about, while whistling a merry tune as they turn around and sell AI to the US military and worse, to the tune of $billions/year already.
The "and worse" needs elaboration, because fundamentally the single biggest cash cow for AI vendors will be (and maybe already is) implementing a dystopian future where everything we say, type, or do will not just be recorded but also: read, analysed, and cross-correlated by unfeeling heartless machines tasked with keeping us in line.
I'm not being paranoid, President Biden said as much, but only in reference to China. If you think only China has motivation to use AI to keep a lid on dissent, I have a bridge to sell you. And if you think the Land Of The Free(tm) will never abuse AI in this manner, well... I have some bad news. You may want to sit down.
Here in Australia, the cyberpunk dystopia is already starting to be rolled out. A customer of ours asked their IT team to hook up a variety of HR-related information sources to their new AI system tasked with making recommendations for hiring, promotion, and demotion.
Welcome to 1984, citizen.
And the scary thing is that you can probably easily sell it to Democratic voters if you track racism scores for people, so you can filter people out of your dating pool or job/rental applications. Most people don't care about privacy as a fundamental right, and they'll roll over and compromise if you give them a way to track what they hate. You just need to make sure it is "bipartisan" and it'll be wildly popular.
The very same CEOs are extremely against social support, any taxes for themselves and any govermental agencies that help or protect people.
How can this possibly be "easiest" in a world of Thiel, Musk, Trump, Vance, and Palantir, with an Overton window that has been moving toward the economically conservative for years?
Like copyright. All modern LLMs are built on troves of copyrighted material that was used in their training. AI companies are claiming this is fair use, while pretty much all of the copyright holders would strongly disagree. This is going to get litigated for years, but regardless of what various legal systems decide, morally, people can be against this.
And people are already sick and tired of AI-generated content being used to replace human made content, be it on Spotify or TikTok. This is part "AI replacing humans", part "I'm being scammed by lower quality content".
OpenAI: We’re allowed to steal everything to train our AI and you can’t complain
Developer: Ok, I’ll use your AI to train mine
OpenAI: NO NOT LIKE THAT, UNFAIR
You can't put things back in the bag. Perhaps the true underlying social problems are:
1. There are too many humans and not enough jobs.
2. The capitalist system only rewards profit seeking and cost externalization.
3. Our democratic representation myth is dead and buried.
4. Even in the developed world, middle-class security is gone.
So here's my question: given the current global system has failed and is clearly in its death throes, as a pan-national species how can we transition to a less mono-focal, economic-rationalism-driven means of governance and self-organization without turning into an autocracy, or reinforcing negative nationalist bloc-level thinking that will tie us into the same old human-thump-human stone-age ape-ism and environmental cost externalization?
Perhaps AI can help in areas like improved education, improved media, proposals for improved government process or process transition for enhanced efficiency. Enforce transparency and accountability in the halls of power by reducing human process and corruption. Public auditable decision making and public auditable oversight. It's at least potential grounds for partial optimism. The best I can summon under present conditions. Of course, we want to avoid a dystopian global AI autocracy, the technocratic basis for which we have already well established, but if you view the present system as a dystopian human autocracy with the same technocratic basis (an increasingly rational perspective given recent events), then it starts to look more rosy.
If the Epstein class wouldn't go for something like this in a world where they needed workers to produce, the idea that they will when we are surplus to requirement is inconceivable.
“Mythos is too dangerous to release.”
“OpenAI offers a bounty if you can get ChatGPT to teach you how to do a bioterrorism.”
“Agentic agents will replace entire categories of jobs. They’ll just be like, gone”
This is all signaling to their customers; no not you on their $20/month plan, the governments and corporations of the world who have deep pockets, fat to trim, and borders to defend and expand.
It’s no surprise that people don’t like AI. It’s not for people.
Then again, the CEOs of these companies want to grow their companies at all costs to society.
In fact it's a very sad story about a 20 year old throwing their life away instead of fighting for what he believes is right through non-violent activism and/or regulations.
Last year I wrote an article asking the very question "Who will be the next Luddites?"; National Geographic followed up months later. I'm sure many before, after, or in between covered the same topic. There is truth to it: we will be impacted. But let's not forget we went through this during the industrial revolution, and we should be better equipped than ever to fight using meaningful non-violent acts and operations.
https://www.linkedin.com/pulse/who-neo-luddites-more-importa...
http://nationalgeographic.com/history/article/luddite-indust...
https://en.wikipedia.org/wiki/Luddite
https://en.wikipedia.org/wiki/Neo-Luddism
One of the more famous examples here in the US is that of the equal rights marches in the 1960s ultimately leading to the end of segregation.
What I'm not sure of, though, is what kind of impact there is on the required percentage of people participating when we have media outlets like Fox News, which was demonstrated to have fabricated images during events like the Black Lives Matter protests to make them look as if they were violent.
1. https://en.wikipedia.org/wiki/3.5%25_rule
MLK Jr.'s Civil Rights protests are an obvious counterpoint to this claim.
Not all sacrifice needs to be all or nothing.
How can we glorify Thomas Jefferson and teach kids about him saying "The tree of liberty must be refreshed from time to time with the blood of patriots and tyrants", only to then condemn the spilling of any modern blood? Truly, what is the difference between torching a warehouse of toilet paper and tossing some tea in the harbor?
How can we condemn one and celebrate the other without being hypocrites?
Propaganda is the difference between rebels and freedom fighters.
You could have written L'Ouverture instead and it would have been a great example.
We've tried to internally pitch many ideas to the larger organization before but mostly got nothing back.
Finally, one of the various board members talked to my boss and told them that, essentially, it has to be top line growth, not bottom line savings.
We looked this up and it came down to some MBA mumbo-jumbo about how X% of growth is better than that same X% of savings once you run the math (?). Look, I know, that's not how percentages work and I know that savings actually do matter. But in 'I have an MBA-land' the mantra is topline > bottomline.
So, then we started to pitch ideas around growth (new lines, more customer sales, more customers, etc). Which went ... nowhere ... again.
Time goes by again, and another helpful person reaches out and tells us that our ideas are 'not worth considering' as they 'don't meaningfully impact revenue targets'. Again, essentially, just to justify the salary-time that these internal boards spend, the idea has to be net positive. Then we learned that, no, it has to impact revenue by 1%. For our BigCo that's in the ~$10M ballpark. We do have the customer base to support that, but it is in the revenue ballpark of Atari or the Hypixel servers.
Look, either way, the run-around that I get told is that for AI projects that we pitch internally: 1) Top line growth only 2) ~1% increase in revenue (~$10M).
Now, why anyone wouldn't just take that ~$10M idea and make a company themselves is beyond me, but I don't get paid the big bucks, so who knows.
Still, that is what these BigCos are looking for: Growth in the ~$1-10M range.
It's an expensive route to mediocrity, which doesn't offer an edge in a market where everyone is using the same snake oil.
So now you're wrangling an "AI" system and you're doing most of the work you would have had to anyway. ...And when you don't it can get really embarrassing.
https://www.abajournal.com/news/article/elite-wall-street-la...
Not the first time, surely not the last. The problem is that so much money is tied up in this thing, and the moment the music stops the bag holders are going to be utterly doomed.
Good news, the plan is for us to be the bag holders as they rush to IPO.
Sam Altman still says, after being the victim of anti-AI violence, that "the fear and anxiety about AI is justified" and "it will not all go well".
People simply refuse to believe that AI companies are serious about this, and get twisted into knots trying to understand why AI companies would choose this messaging under the premise that they can't be serious.
I’m sick and tired of AI hatred without people facing the truth. People hate AI because AI is on a trajectory to replace them and become better than a human. That is the fundamental reality.
Look, don’t get angry at me. If you are on HN, chances are you’re most likely delusional and completely wrong about AI. The majority of HN called vibe coding useless and said LLMs have no potential. Now my company won’t even hire someone who hasn’t used Claude, and I haven’t touched a text editor or IDE in half a year. Same with the teeming hordes of experts on HN who said driverless cars would never come. All wrong. People on this site need to stop jumping on these bandwagons of stupidity and pointless blame games.
Can we talk about that rather than blame corporations for being what they’ve been since before AI? Yeah, corporations are psychopaths and corrupt and nobody cares. Same story till the end of time. We are on the cusp of a paradigm shift, and your skills as a programmer are about to be utterly trashed because an AI is on a trajectory to dominate your skills.
Face reality.
Let’s explore this fundamental reality a bit. The “and” necessitates both parts of the clause be true, but would people hate AI if it became better than them but didn’t replace them? That’s an easy no; Deep Blue and AlphaGo didn’t cause mass hatred, and machines have been broadly better than humans in some capacity for centuries - that’s literally why we build machines.
Would humans stop hating AI if it replaced them, but wasn’t able to become better than them? Again, no. So the second piece is both incorrect and unnecessary, and what we’re left with is “People hate AI because it’s on a trajectory to replace them,” which is accurate, but not exactly revelatory; many people have already come to this same conclusion, including in this very comment thread. So the good news about your face reality line is that you’ll find a lot of people already facing that direction alongside you.
Yes they will. Jealousy. But they'd never admit it. What are you proud of? What skill do you value and identify yourself with? Say AI did it 1000x better than you but some law was in place to prevent it from replacing you. You'd love that law, and you'd make up some excuse to hate AI.
>Deep Blue and AlphaGo didn’t cause mass hatred
Excuses. Just think a little rather than finding some obvious surface-level reasoning that fits within your own bias. First, nobody hates those things, because it's only a select niche that takes pride in their chess or Go skills. Those people would hate AlphaGo if AlphaGo were a direct challenge to their identity as players. But laws are in place to prevent that, as tournaments only allow humans. Why are such laws in place? Because Go and chess are just games. They produce no intrinsic value, so it doesn't hurt the bottom line if you restrict AI in that case.
This isn't the case for programming or any other field that can be replaced by AI. AI will be directly attacking a business skill you use to pay the rent, and it is currently challenging my identity as a programmer. And laws to restrict this will be actively fought against, because monetarily and utility-wise there are actual real-world benefits to AI.
But why do I even need to spell this out to you? You're not mentally deficient. You're not stupid. All of this is obvious. Why do I have to literally tell you why your example is biased when it is OBVIOUS? It's because you're lying to yourself. You subconsciously avoided the obvious reasoning above. You chose convenient rationale to fit the narrative YOU want. Nobody hates "AlphaGo" lol, did you see that Korean guy's face when AlphaGo fucking dominated his ass? Come on bro.
That is the reality. And you are denying it. When there's two people in disagreement and one of them is lying to themselves... how do we know which one it is? The lie is so convincing that both people believe in it.
I'll tell you the best way to determine this. The best way is to see which person's reasoning aligns with their identity and biases. Which person is constructing a logical scaffold that is optimistic? Because lies are told to cover up the horrors of reality. Guess what? I'm a programmer. I hate AI. But I cannot lie to myself. You? You probably made up all kinds of lies about how you're not afraid of AI taking over your job cuz AI can't do this... or that... or whatever bs to help you sleep at night.
Which company is that? Do let us know so I can make sure to never be your customer.
Stares at poster silently from a lotus position waiting for the enlightenment lightbulb
>I’m sick and tired of AI hatred without people facing the truth. People hate AI because AI is on a trajectory to replace them and become better than a human. That is the fundamental reality.
Nu-bie, come, sit, be silent & reflect. When was the last time a tool was made that truly replaced the wielder? Without the wielder, a tool is nothing, without the tool, the wielder still strides as a beacon of divine potential.
>Look, don’t get angry at me. If you are on HN, chances are you’re most likely delusional and completely wrong about AI.
Continues staring in silence awaiting the moment of enlightenment
>Now my company won’t even hire someone who hasn’t used Claude, and I haven’t touched a text editor or IDE in half a year. Same with the teeming hordes of experts on HN who said driverless cars would never come. All wrong. People on this site need to stop jumping on these bandwagons of stupidity and pointless blame games.
Nu-bie. Does the man disappear because the machine exists? Or is he redirected according to his nature? What nature consumes a man abandoned by his tribe? Surrounded by hoarders of the necessities & means of life? Reflect on this. Reflect also on the potential capabilities of a group of people that through attention to detail, great patience, and acts of artifice on behalf of their fellows once enabled the animation and thinking of rocks. Think very carefully about this.
>Can we talk about that rather than blame corporations for being what they’ve been since before AI? Yeah, corporations are psychopaths and corrupt and nobody cares. Same story till the end of time. We are on the cusp of a paradigm shift, and your skills as a programmer are about to be utterly trashed because an AI is on a trajectory to dominate your skills.
The corporation is as a cup. Its direction is controlled and its agency guided by men. It is the oldest form of AI, with us for hundreds of years. The only thing keeping it in check being the occasional times of great strife, during which generations of men wrestle the beast, to remind ourselves of wherein our problems truly originate.
>Face reality.
Nu-bie, it is time for you to resume your chores. You have not been enlightened.
magnet for scum like boosters on X, middle-management types, LinkedIn AI influencers, people making fake videos on Facebook.
At least crypto does not take away more jobs than it creates, whereas we all know AI takes away more jobs, and no one can give a solution or explain what the "new jobs" are.
Because the value from AI is to automate the jobs from humans. Claiming otherwise is being intellectually dishonest. Same goes for defining "AGI".
AI has real value. We can argue about whether the cost is worth the value, whether we're on an exponential improvement curve or not, whether it ends up creating jobs or destroying jobs, but AI is mind-blowing science fiction that nobody would have believed would exist 10 years ago.
Except sometimes when there's a huge black swan event, or when the bubble pops. Such things can result in significant layoffs even though it's a completely different mechanism.
This was said with a straight face like “people love puppies!”.
No self awareness at all.
Meanwhile I saw some survey where only something like a third of Gen Z and younger are pro-AI.
Of course the survey also said like 70%+ of them still used it.
Also, looking at the current market situation, how many people would be willing to say to their bosses, or even publicly, that they think AI is quite a lot of bullshit?
My new favorite game at work is "guess if this person is really into AI or they just have to be because their boss is and if they weren't they would get replaced by someone who is" and it's quite hard to say.
And since the "boss" of CEOs are the investors in the stock market, and the stock market is automated to ridiculous degree, is this AI pushing for itself?
You can tell that everyone loves chain buffet restaurants by going to Golden Corral and asking everybody if they are enjoying their meals
Isn't this fundamentally what MBAs do with their time? Keep going with this analysis, because it goes much deeper... In my experience, BI is often a house of cards. A lot of times it's just narrative crafting, just like we're all encouraged to do when we write our resumes.
Can you embellish a story? Can you invent a convincing political narrative? As far as I can tell, that's the fundamental unit of US corporation.
If the industry continues to gleefully ignore public discontent over AI's impact on society, I imagine what might happen is a public backlash that would make the post-Chernobyl anti-nuclear sentiment look tame.
I wish articles like this would at least acknowledge the massive adoption AI has among programmers. It's not comparable to stuff like helping you write the occasional email, which I presume is the baseline for most people outside tech. Making it sound like a minor tool that some people are still just experimenting with completely misses the impact it has already had on software development.
Adoption in particular is a useless metric. They are forced to adopt even if it's not really helping in their case, or if it does help but using it makes them miserable, like being forced to switch jobs from something you enjoy to something you find boring and tedious. And then there's the "expertise debt" that will have who knows what impact in the coming decades.
But there are a lot of areas where AI is helping that people don't see, like in medicine. Drug development, cancer research and early detection, CT and MRI analysis, just to name a few. These use cases are vastly more important but rarely get discussed. It's important to know that AI isn't this one singular thing, or else we risk throwing the baby out with the bathwater.
A person having a negative attitude about AI doesn't mean that they wouldn't keep the parts that are mostly positive if they could.
AI is massively marketed by AI people as a tool to replace your job. So either the AI people are bad at marketing, or the gains in other industries are insignificant / do not generate shareholder value.
When AI produces those meaningful advances in those fields, great, we can start having meaningful discussions about them. The greatest medical advancement of the 21st century is likely mRNA, or maybe GLP-1 for some. Neither were LLM assisted in any meaningful way as far as I know (they predate ChatGPT, perhaps more primitive models were involved in ways I’m not familiar with). Until those advances come, this argument is fanfic.
Plus, in the most morbid way possible: who gives a shit about living longer if they are stripped of their career, inundated with slop at every turn, and can't trust any information? These are real problems that AI has already created, unlike the fanfic of curing cancer.
While I find a Gemini Ultra subscription worthwhile for myself, most of the value is in the fun and entertainment of interacting with a strong API in AntiGravity (usually use Claude models), Gemini App, NotebookLM, etc. It is intellectually interesting and fun.
Can I justify the cost to society for data centers, possibility of US government bailing out the AI tech giants, etc.?
No I can't. I think the Chinese are skunking us. Building cheaper AI is the winning strategy. GLM-5.1 and Deepseek v4 are amazingly effective for much lower inference costs.
Helpful, sure. Would humanity be better off without generative AI? Definitely.
When you use ChatGPT for yourself, you may have a sense that what you see is made up; when someone else that you trust uses it and pronounces the output in a way that suggests it is their own, you are left doing much more complex social math to figure out if your trust in this person or entity can hold. It gets exhausting, personally.
And their company's leadership is famous for compulsively lying. Pardon me if I suspect they might be arriving at that number using creative math.
The people: ??
Investors: Tell us more.
Of course normal people found this incredibly off putting.
We built the most meritocratic and accessible career path possible. If you knew how to code, and you invested in your craft (or didn't!), you were more-or-less guaranteed multiple amazing, well-paying career paths anywhere in the world.
Yet, a cohort of us decided "what if we built this thing that literally does our job? what could possibly go wrong?"
Yeah, this is gatekeeping, but the medical and legal industries have perfected that, and our industry doesn't even require advanced degrees to climb the ladder! (John Ternus only has a bachelor's in mechanical engineering!)
Why did we Eric-Andre-meme ourselves?
Because it's even less useful than a washing machine. Unless you trust a frickin' humanoid robot doing your house chores, which is batshit insane as things stand.
Data centres popping up near you probably mean higher electricity prices, poorer air quality, and water problems.
Sam Altman is a massive penis, with a gift for saying the wrong thing at the wrong time.
The two things that link them are "rich" people imposing their will on everyone else, publicly.
People, especially many SWEs, like generating with AI - or, more tellingly, wouldn't want to give it up in their work.
On the other hand, people generally hate consuming the product of gen AI.
Consumer experience = mostly negative
Producer experience = mostly positive
I’m honestly baffled. What’s there not to like?
It’s easy to fixate on the OpenAI- and Anthropic-level companies, but the real inescapable flood of AI garbage is coming from the downstream companies building on the core AI providers. Communities like HN have some role to play here. Maybe some peer pressure on AI founders to not make the world a worse place?
My wife was shocked to learn how much she liked Claude after these forced experiences with AI.
I am not condoning violence, but claiming it is not a politically effective tactic is disingenuous. I get that columnists are trying to cover their asses, but still.
Violence is the reason slavery ended in the US. Violence brought us civil rights laws. Gay rights. Women's rights. Labor laws. Environmental protection laws.
Every right granted by default to white Christian gentlemen at the founding of this great nation had to be taken in blood by everyone else. That's just how America is. It cannot be trusted to live up to its own standards except at gunpoint.
When, where and how violence is justifiable is a different question, of course. But the premise that "Naturally, violence is never an answer, nor is it a politically effective tactic" is simply false. If violence were politically ineffective, authoritarian states wouldn't use so much of it.
What violence brought about women's rights or environmental protection laws? I suppose protestors destroyed the fur market.
The fact that AI acolytes are positively giddy about the above is just icing on the cake.
And yet, as the will of the people is ignored to the benefit of but few, violence will become the answer.
... and in a classic example, apparently the mere mention of concern regarding the rise in US political violence got this thread flagged. Where can you have a discussion anymore?
Many people here would call Putin's assassin a hero; the important distinguishing factor is whether it's a clear societal good or bad. If it's unclear, it's assumed bad.
I am not disagreeing with you here. But platitudes do nothing to convince people. You need to actually explain why the world is a better place with X politician in it, because it does actually matter.
Once you start shooting, everyone starts shooting. Bystanders get hit. Companies start defending their businesses with private armies. The economy collapses. We all lose.
Countries high in political violence are the worst places in the world to live.
It's just that most voters don't agree with you.
History has repeatedly taught us that violence is usually the answer. I wish it didn't have to be this way, but it is what it is.
This is hugely generalized and a little offensive, but there is definitely a core difference that could be more thoroughly described.
As it stands though, the whole "the public hates AI" narrative is about as credible as that phase from a decade ago when random tweets were used to justify whatever position the writer wanted.
billions use windows and gmail but have a poor opinion of microsoft and google both for obvious reasons. I expect the same will be true of AI platforms and the usual suspects behind them.
As we do this, we promise that if we set enough houses on fire, we'll build hell. And imagine how rich we'll be if we sell fuel to keep the hell we built running.
Think back on a time when you and a teammate (or teammates) spent hours or days debating different technological or architectural options and their trade-offs. How much nuance and detail went into those discussions. We used to take pride in our ability to make careful, measured trade-offs. And yet with this tech all of that is thrown out the window.
What I really hate is agentic customer support, sales, etc. When you have to use them, you realize how stupid the workflows, tool calls, MCP, and all that glued-together garbage are, built just to cut costs rather than reduce churn.
PS: Ironically, I'm working on coding an "agentic platform" for the product suite and their backend services. I simply don't feel confident about the product I'm building, but I guess it pays my bills for the moment.
The situation might be different in the States, but I'd wager Joe Sixpack, brass fisher in Montana, couldn't care less about GPT-5.5 or whatever Musk is up to these days.
I don’t think Montana fishermen have a broad impact on society, or its decision making. There’s just not that many of them.
I'm ashamed that we don't care more about human dignity. I care about human dignity and wonder if I'm an outlier. Even a tiny pledge and affirmation, "Hey, we see you, we are working to bring relief and guaranteed dignity to your lives by doing xyz," would help. Instead, when I ask for peace in war [edit: and basic income, anything that is an essential part of dignity] [edit 2: and I hear it's not possible right now, while that isn't said of AI investments], I hear unaccountable leadership dodging their responsibility [to their constituents] and accelerating conflict while their friends' pockets get thicker.
They need to accept far more than that. They need to accept that they may not be able to "create a genuine technology that benefits the public" at all, and that they therefore may be required to stop completely and totally dissolve all their operations if it turns out that is what is best.
Gone is all the experience in clean code, good idioms, etc. All replaced by easily generated shitty code that can be removed and generated again as we please, until it works. No thought about the quality of code itself. Some companies are straight up forcing programmers to live in Claude Code and never even see the code, just write the spec.
It’s disgusting. And the worst part is that you can’t opt-out. If you give even the slightest hint that you don’t like AI you’re seen as a Luddite and you’ll be put next in line for the upcoming layoff.
(a) loss of fulfillment (b) lower quality of output and nobody will care so the world will just "degrade" and (c) a perceived lack of autonomy ("forcing", "you can't opt out") around how adoption itself is executed
Although, full disclosure: I have quibbled with Gemini quite a bit over the trailing comma, which clutters the diff, and buries the lede at code review.
But it's been very gratifying to refer to modules entirely by their role in a given design pattern (eg "driven adapter") and be understood. To define the idiom, and see it adhered to.
But am I operating still at too low a level? Would I be penalized, at these "some companies" for not producing shitty code?
Ah, but in my particularly forward-deployed line, there's always an element of showmanship compelling me to write demonstrable code.
But, also, how can I specify the behavior if I can't name the component? Is it really possible to "vibe" code a sophisticated piece of software entirely from the user's domain terminology? Without any intermediate abstractions in mind? Inconceivable, frankly. There are invisible walls, invisible shapes beneath the surface.
Then again, I'm young enough to have never allocated memory manually in my professional life.
* artisanal, handmade products
* affordable products, not just for the rich
* well-paid workers
This was true of clothing, agriculture, and will also be true of SaaS. I choose affordable products and well-paid workers, but that requires embracing automation.
So we found something much worse than crypto.
You can opt-out of crypto, but you cannot opt-out of AI and have no choice but to participate.
2. flooding social media with obviously fake AI content
3. only billionaires benefiting from it and gloating about it.
Anyone who was in AI before 2022 can tell you about the last cycle, which ran from 2012 to 2018 or so, when the metaverse failed but we got TensorFlow, PyTorch, and GPGPUs.
The cool thing is that every hype cycle generates a lot of really good new AI tech and integrations that persist. This time we got GPTs, diffusion models, and Gaussian splatting.
I think this previous cycle will be seen as the penultimate with the next one permanently improving with no scale back.
We’ll be fine. We have survived every winter
What you're describing on Reddit sounds like a broad-based antipathy to AI, which is just... how a lot of people are feeling?
You can criticise their motivation being based in emotions or vibes instead of facts and thoughts, but unless you have evidence to the contrary, it sounds like this is just where people are at on this topic.
When someone hears from these leaders that there will be a white-collar "bloodbath", then sees enshittification in their daily lives from misapplication of the tech, can no longer trust any newly published photo, etc., it's the most rational response.
Unlike vaccines, the patent misbehavior by Monsanto says otherwise about GMO.
For vaccines, a good portion of the population remember vaccines being developed and marketed to help people. Then there are immigrants that remember more recently how life changing vaccines are.
And then nuclear weapons. World would be so much safer if every country had sufficient arsenal of them.
Nothing at this point will make people believe AI is good for the masses.
What will need to happen for people to like AI? I say they will need to get real money, month after month, covering more than inflation, not the dumb tax deductions Trump harps on. In this case, maybe $1,000 USD per month from AI, adjusted yearly for inflation, would end this trend.
Why a payment? Because all they see is the wealth of the top 1% increasing almost exponentially while they are struggling to pay their 'fixed' expenses.
In reality, since 2008 the rich have been cashing in while workers have been footing the bill. That is the big issue.
The only people who still look positively at AI are either the ones working on it or building something with it, or the ones profiting from it, kinda like crypto a few years ago. And just as crypto is now mostly associated with scams, I imagine something similar will soon be associated with AI.
Even tech people who are not directly in the AI industry hate AI, due to all the chip shortages and price increases across the hardware board, hitting everyone from gamers to sysadmins to hobbyists. I mean, an RPi now costs almost as much as a fully fledged NUC did a few years ago.
Edit: to add, did AI improve the average person's life? Nope. When it wasn't increasing costs or tracking and violating their privacy, it flooded the internet with slop or a frustrating, useless AI chat support. From the average person's perspective, it added nothing to their quality of life: it didn't make things cheaper, it didn't improve their travels, it didn't magically make them teleport, and so on. Instead, AI was used for all sorts of hostile purposes against the average person. Even from a technical perspective, have we seen any breakthrough in tech given AI is a "superior" assistant? Nope. Software is more shitty and buggy now, SaaS prices are even increasing (probably to pay for AI tokens), software developers are saying coding isn't fun anymore, hardware designs didn't improve, and government processes still have the same bureaucratic system, plus AI. Unlike when automation was introduced decades ago, when people did notice an improvement in their quality of life.
This is creative destruction in a whole new sense. Just chugging through genuine (or human) creativity, then training on human prompting, then finally ascending near the cluster of Anthropic/AWS nuclear power plants. And people pay for the pleasure.
Upsides of AI: I can ask it if my farts are caused by the celery I ate earlier
I take your other points, but I can't see the connection there. I've heard that they increase electricity rates in many cases (poorly managed electric utilities that can't build out grid capacity without raising rates for everyone), but not that they're affecting housing.
Next to that, there is grid congestion. The energy grid is currently at a critical point; if you add a data center, that means 20 to 30 newly built homes cannot be connected to power. There are currently new homes waiting for a grid connection before people can live in them.
Space. In the densest country in Europe (non-microstate), a hyperscale data center could have been a neighborhood.
Last point, maybe not the strongest: construction workers. While the construction workers building a data center are different from those building homes, it doesn't really help with the labor shortages in construction if electricians are all busy building data centers.
This is an insane regulation, and I wonder if it was passed by NIMBYs whose actual goal is to prevent the construction of housing near them.
The municipality bought the emissions rights from the farmers that held those 8 cows and the farmers then had to move/remove/slaughter 8 cows.
Welcome to The Netherlands.
If the Dutch government was a bit smarter, they would buy out the farmers and create a mega-campus for ASML, including housing for all those expats.
Edit: I stand corrected, last month ASML was granted permission to expand by 20,000 employees.
I've found that LLMs don't give good advice regarding diet. They just agree with whatever your hunch is.
ChatGPT agreed with my hopeful self that I got diarrhea from VR sickness, as opposed to my poor food handling, which it turned out to be.