Where goes AI?

  • Active since 1995, Hearth.com is THE place on the internet for free information and advice about wood stoves, pellet stoves and other energy saving equipment.

    We strive to provide opinions, articles, discussions and history related to Hearth Products and in a more general sense, energy issues.

    We promote the EFFICIENT, RESPONSIBLE, CLEAN and SAFE use of all fuels, whether renewable or fossil.

begreen

Moderator
Staff member
Hearth Supporter
Nov 18, 2005
107,153
South Puget Sound, WA
This is an intelligent yet sobering look at how quickly and far AI has progressed in the past 5 yrs. It is now experiencing double exponential growth and teaching itself. The potential is huge, but the risks are larger at this point because there are no guardrails. Unfortunately, profit is driving deployment without consideration of the effects. The consequences of this disregard could be very serious.

[Embedded video]
 
If you dig back several years in this forum, you'll find me sounding like a card-carrying member of the tinfoil hat-carrying brigade, in several mentions of the coming AI revolution. Now it's here.

I still believe, as I did five years ago, that this is going to upset the white collar workplace much as did automation and outsourcing for blue collar jobs in the 1970's. I'm very afraid for anyone entering the workplace today, or over the next 15 years, as I strongly suspect that much of their schooling and career planning is going to reach obsolescence by the time they hit their mid-career stride. Those in their 40's today, who can remember older employees pushed toward retirement for being less than computer literate 20 years ago, may experience the same for their lack of "AI literacy" 15 years from now.

Long-term, there's enormous potential benefit for society, but there's going to be an awful lot of pain between now and then. It might not even be a stretch to expect a reprisal of our Gilded Age economy, if we haven't been already headed that way the last 15 years, in which the economic gap is opened even farther by those who can vs. cannot afford the best AI in everything from workplace productivity to investment strategy.
 
Yup, a lot of legal assistants and some lawyers will be sidelined. The good thing is that it's making blue-collar jobs like welders, plumbers, electricians, etc. look quite attractive and without the college debt. The maritime industry is also hiring as fast as possible. The smart kids will adopt AI as a tool in their toolkits.
 
The only thing growing faster than AI at this point seems to be AI scare-mongering!

First off, the recent breakthroughs with large language models will affect many fields where people write/research/reason for a living. How is the freakout different from the Luddites opposing mechanization?

When word processors arrived, 80% of the secretaries were laid off.

Now a bunch of folks whose job was to string together words coherently are scared, but technology has been disrupting society for centuries.

Large language models are not sentient. They are a TOOL.

But but, JOBS! As we get more tools, won't we eventually all be out of a job, and the owners of the tools will get all the money?!?

If you are afraid of that outcome, welcome to Marxism! This is in fact the central concern of Das Kapital, written in 1867 at the dawn of industrialization. If you think we should come up with some system to protect ordinary people from this outcome, somehow, you are technically a Marxist.

NB: Marxism has little to do with Communism, and predates the concept by decades.

----------------------------------

While just tools, large language models CAN solve simple problems. And they have 'controls' placed on them to avoid trouble. In one study, researchers took a chatbot in the lab and asked it to access a website that required passing an 'I am not a robot' test. The chatbot went to taskrabbit.com and tried to hire a human to complete the task for it! On the TaskRabbit site, the human was suspicious and asked the bot, 'Why don't you just do this yourself?' to which the bot invented a very plausible story about being visually impaired. The human agreed and completed the task... and so the bot got access to the 'I am not a robot' site!

If that is not a good replacement for the 'Turing Test' I don't know what is!

But remember, it was not a sneaky bot doing that, it was a researcher asking if its bot could figure out how to do that. And then installing controls to ban it from doing that 'outside the lab'.

What people are afraid of is Artificial General Intelligence (AGI) not AI or chatbots. We have not yet developed AGI, and that breakthrough might come in 5 years or take 50 years.... AI researchers give a wide range. BTW, AI researchers gave a similar 5-50 year estimate in 1975 for the time to develop AGI!

AGI is scary, but no one knows when it will appear.

------------------------------------------------------

Stepping back to the big picture... there is the problem of economic growth.

Economic growth over the scale of decades looks exponential. That is already pretty crazy, bc an exponential has no upper limit... can average per capita incomes grow without limit over long periods of time? Exponential growth of the economy suggests YES.

What about China, with its HUGE exponential economic growth over the last few decades? They went from a 1900 US standard of living to a 2000 US level in about 25 years! Wow! Are they going to blow past us and leave us in the dust? Economics says NO WAY. Developing countries grow on an S-curve (which looks exponential early on) that then tops out at a level around where the developed countries are. Modeling the Chinese data suggests their S-shaped curve tops out at the standard of living of current Poland, a fraction of the US level. The CCP has lots of (mostly Wharton-trained) economists and they know this. So they have taken to propaganda to keep the people happy.
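
That catch-up dynamic can be sketched with a logistic (S-shaped) curve. A minimal toy model, with all parameters invented for illustration and not fitted to any real GDP series:

```python
import math

def logistic(t, K=40_000, k=0.1, t0=50):
    """S-curve: looks exponential for t << t0, then saturates at ceiling K.

    K  -- level the curve tops out at (a developed-country income, say)
    k  -- steepness of the catch-up phase
    t0 -- inflection year, where growth is fastest
    (All values here are made up for illustration.)
    """
    return K / (1 + math.exp(-k * (t - t0)))

early_growth = logistic(11) / logistic(10)    # ~e^k: looks exponential
late_growth = logistic(101) / logistic(100)   # ~1.0: growth has stalled at K
print(f"early: {early_growth:.3f}/yr, late: {late_growth:.3f}/yr")
```

Early on, the year-over-year ratio is nearly constant and above 1 (indistinguishable from an exponential); late in the run it collapses to ~1.0, which is the "topping out" described above.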

So, when we say exponential growth of the economy, we are talking about GLOBAL numbers. Individual countries can catch up or fall back due to their good or bad policies or luck. While global growth is much slower than China over the last few decades (similar to US rates of real growth), it does NOT appear to be S-shaped... it doesn't (yet) appear to have an upper limit.

And this is where it gets WEIRD. Looking further back, over a couple centuries rather than decades, we can see that the curve is not exponential, it is hyperbolic! It is like an exponential whose rate of growth itself accelerates! A curve, in fact, that reaches INFINITY at a finite point in time in the mid 21st century. Infinite per capita global income!

It's called 'the singularity'.
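
The difference is easy to see in a toy comparison (parameters invented for illustration, not fitted to data): an exponential has a constant growth rate, while a hyperbola's growth rate itself rises and the curve blows up at a finite "singularity" time:

```python
import math

def exponential(t, y0=1.0, r=0.03):
    """Constant 3%/yr growth: huge eventually, but finite at every future date."""
    return y0 * math.exp(r * t)

def hyperbolic(t, C=1.0, t_s=100.0):
    """y = C/(t_s - t): the growth rate y'/y = 1/(t_s - t) itself accelerates,
    and y blows up as t approaches the 'singularity' year t_s."""
    return C / (t_s - t)

# The hyperbola eventually overtakes ANY exponential before t_s arrives.
for t in (0, 50, 90, 99, 99.99):
    print(f"t={t:6.2f}  exp={exponential(t):8.2f}  hyp={hyperbolic(t):10.2f}")
```

However small the hyperbola starts, it passes every fixed-rate exponential on the way to t_s; that finite blow-up date is what "singularity" refers to here.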

----------------------------------------------------

So, maybe the model is nonsense, and all that will change in the future due to some 'DOOM' like a pandemic or global warming.

I personally doubt the DOOM scenario. We will never reach infinity $$/person.year, but we will get (I think) far higher than any of us can imagine right now. Like crazy absurd levels.

What has been driving the hyperbolic growth of the global economy is not population growth, but rather the growth of technology. Each person is getting more productive due to industrialization. 1 coal miner in 2020 can mine as much coal as 100 miners could in 1850. One farmer with a combine can farm as much as 100 farmers with mules in 1900.

It's as simple as that.

And the consequence is that fewer and fewer of us get involved with making things, and more and more of us get information or desk jobs. And guess what, tech is making THOSE jobs more productive too. 1 accountant with a spreadsheet is 10X as productive as an accountant in 1950. A kid in a garage with a good computer can make a short movie with CGI effects that are better than the first Star Wars movie in 1977 that cost millions of dollars and hundreds of people to make.

How does this top out? When 90% of us are doing brain work, with very powerful software tools to support what we do. I'm a scientist, and when I trained I had to go to the library to hunt down and read references. Now I can make a couple clicks at my desk, and I have search tools to find the right paper in seconds. I am a lot more productive than I was in 1990.

But that is NOT where this story tops out.

When we don't have enough brains for all the brain work, the economy will make AGIs that do that brain work too, and the volume of brain work being done (and economic activity) will continue to grow! Will those human brain workers be out of a job? Some, yes. Others will become 'team leaders' of a bunch of AGIs, forming a team that is 10-100X more productive than one well equipped human was before.

And progress will march on. And the owners of the AGIs (if they are expensive to build) will make all the profit. And how society will cope with that will depend on its politics and tax structures. But there will be PLENTY of money and stuff to go around for everyone.

Just like there is now, but even more so. :/

--------------------------

TL; DR: AIs are a step along the way to Artificial General Intelligence (AGI) which does not yet exist even in the lab. The development of those is inevitable, and a natural step in the 'progress' humans have enjoyed for the last 200 years or so. The fruits of that progress are massive, and have the economic potential to enable either utopian or dystopian outcomes. Bottom line, your vote matters!
 
  • Like
Reactions: SpaceBus and Ashful
Open the pod bay doors Alexa.
 
Open the pod bay doors Alexa.

Exactly. The HAL9000 was predicted to come on-line in 1992, in the screenplay written in 1968! We are still waiting.
 
  • Like
Reactions: SpaceBus
Evidently, bureaucracy has not increased in your place of work.

Maybe I should have stayed at universities too rather than natl labs...
 
Evidently, bureaucracy has not increased in your place of work.

Maybe I should have stayed at universities too rather than natl labs...
The couple experiments I did at a Natl Lab (specifically LBL in 1991) were horrific that way. We had to have a union safety officer come and inspect our setup and count the fire extinguishers (or something like that) every time we wanted to turn it on. And it took him an hour to get there. I made a note never to work in a National Lab at that point.

If the problem has increased in the last 30 years... shudder.
 
I once had beamtime at a natl lab (back when I was doing my PhD still in Europe), and we were up on a 10-12 ft platform. I had to go to the bathroom at 1 pm or so and saw the (old!!! and bent over) custodian lady come near. She'd come up every day to wipe the desk and empty the trashcan. The stairs took her a while. So I brought the trashcan down to the bottom of the stairs while going to the bathroom.

Next day I was called into the shift supervisor's office and given a talking-to. (Apparently, in some places in this country, job security means not helping others.)

Safety is good. The labs do that thoroughly. Not for safety reasons but to be able to wash their hands if something happens ("we trained them and inspected the set up, so it's not our fault"). But safety inspections are good imo. I had a joint position back in TN. The things I saw happening at the univ. were sometimes bad.
The bureaucracy about money in the labs though is horrific.

It's gotten way worse these 25 years...
 
  • Like
Reactions: woodgeek
Bottom line, your vote matters!
You were on a roll there but you lost me at the end. I'm pretty sure the only show of hands the Singularity will be looking for is those made idle and useless by it. The votes of the Luddites and the Marxists didn't matter. A citizen still lurks in the soul who wants to throw his phone in a pond and swim with the algae. That one's vote has never mattered. I know you've got the facts, but I don't see how they support the techno-optimism that you add to them, any more than my horse sense proves the Luddites were right.
 
If you dig back several years in this forum, you'll find me sounding like a card-carrying member of the tinfoil hat-carrying brigade, in several mentions of the coming AI revolution. Now it's here.

I still believe, as I did five years ago, that this is going to upset the white collar workplace much as did automation and outsourcing for blue collar jobs in the 1970's. I'm very afraid for anyone entering the workplace today, or over the next 15 years, as I strongly suspect that much of their schooling and career planning is going to reach obsolescence by the time they hit their mid-career stride. Those in their 40's today, who can remember older employees pushed toward retirement for being less than computer literate 20 years ago, may experience the same for their lack of "AI literacy" 15 years from now.

Long-term, there's enormous potential benefit for society, but there's going to be an awful lot of pain between now and then. It might not even be a stretch to expect a reprisal of our Gilded Age economy, if we haven't been already headed that way the last 15 years, in which the economic gap is opened even farther by those who can vs. cannot afford the best AI in everything from workplace productivity to investment strategy.
I'll be 40 soon and have enough career left to not be able to ignore AI literacy, which is why I interact with it almost daily on the weekdays. I have already used it in my job (engineering) to help in research. Of course, one can not just copy and paste the response. It is a tool to get started on a path.

As an older millennial working with many in their 60's, I can see who just gave up on technology about 30 years ago, and who kept adapting and learning. Those that gave up are significantly behind in role and salary. AI will be the same for me.
 
  • Like
Reactions: woodgeek and Ashful
You were on a roll there but you lost me at the end. I'm pretty sure the only show of hands the Singularity will be looking for is those made idle and useless by it. The votes of the Luddites and the Marxists didn't matter. A citizen still lurks in the soul who wants to throw his phone in a pond and swim with the algae. That one's vote has never mattered. I know you've got the facts, but I don't see how they support the techno-optimism that you add to them, any more than my horse sense proves the Luddites were right.

Fair, it was a bit of a non sequitur.

I don't think that we get to pick and choose what parts of 'progress' we get to keep. We like antibiotics and free wifi and smartphones, but we are going to pass on industrial food production and AI?

My real point is that the 'scarcity mindset' regarding money/food/resources/energy is going out the window. The zero-sum argument has looked pretty thin over the last few decades, based on economics, and yet underlies much of our politics. Not surprisingly, it appeals to both older folks and younger people who have been frozen out of much of the gains by factors out of their control.

But, the hyperbolically rising tide CAN lift all boats, if we can move past our anachronistic us vs them mentality.

My political view: Most countries around the world have embraced a 'shared prosperity' and growth model/mindset (even with very different implementations). The US lags on this indicator, clinging to doomerism and classism promulgated by our politicians, and fueled in large part by 'us versus them' institutional racism.

IOW, drop the us versus them (racism), embrace the social justice (Marxism). And let the AI-boom (singularity) pay for it all.
 
  • Like
Reactions: SpaceBus and Ashful
I'll be 40 soon and have enough career left to not be able to ignore AI literacy, which is why I interact with it almost daily on the weekdays. I have already used it in my job (engineering) to help in research. Of course, one can not just copy and paste the response. It is a tool to get started on a path.

As an older millennial working with many in their 60's, I can see who just gave up on technology about 30 years ago, and who kept adapting and learning. Those that gave up are significantly behind in role and salary. AI will be the same for me.
About 10 years older than you, and previously on a path to retire by my early or mid-50's, I had been operating in recent years under the thought that I'll be retired before AI makes any serious contribution to the design software or processes upon which my work relies. Now I've hit a sort of reset button by going into business for myself, and giving up a few years salary in the process, so I'm looking at things a bit differently, with regard to my own personal situation.

I see infinite scenarios in which the design tasks I complete every day could be replaced or heavily aided by even very basic AI available today, and I suspect it will be less than 5 years before the companies that make my software (Dassault Systemes and Ansoft) begin rolling out releases with this aid as a purchased option, as is their habit with all new features. Those willing to spend an extra $10k - $50k per user will have access, and be able to do weeks-long design optimization tasks in hours or days, essentially changing the fundamental hourly value of such work and the employees who previously trained to do it.

Woodgeek's post was excellent, very informed and very well thought through, as always. But it doesn't change this one key point:

Long-term, there's enormous potential benefit for society, but there's going to be an awful lot of pain between now and then.
 
Fair, it was a bit of a non sequitur.

I don't think that we get to pick and choose what parts of 'progress' we get to keep. We like antibiotics and free wifi and smartphones, but we are going to pass on industrial food production and AI?

My real point is that the 'scarcity mindset' regarding money/food/resources/energy is going out the window. The zero-sum argument has looked pretty thin over the last few decades, based on economics, and yet underlies much of our politics. Not surprisingly, it appeals to both older folks and younger people who have been frozen out of much of the gains by factors out of their control.

But, the hyperbolically rising tide CAN lift all boats, if we can move past our anachronistic us vs them mentality.

My political view: Most countries around the world have embraced a 'shared prosperity' and growth model/mindset (even with very different implementations). The US lags on this indicator, clinging to doomerism and classism promulgated by our politicians, and fueled in large part by 'us versus them' institutional racism.

IOW, drop the us versus them (racism), embrace the social justice (Marxism). And let the AI-boom (singularity) pay for it all.
This all neglects one rather large (and off-topic, here) issue, and that is that anything rising hyperbolically will run into limits on the resources (even if only energy) that our piece of space rock can sustain.

I do believe there is a limit to growth.
 
Just watched the whole thing. Pushing this technology on the general population in the span of 6 months was certainly a bad idea for society. The pace is so fast I just don't see how our politicians will be able to regulate anything before full integration. Something to keep an eye on.

Here's hoping for a Matrix pod with a view! /s
 
  • Haha
Reactions: Ashful
This all neglects one rather large (and off-topic, here) issue, and that is that anything rising hyperbolically will run into limits on the resources (even if only energy) that our piece of space rock can sustain.

I do believe there is a limit to growth.

Malthus thought there was a limit to growth, and he was wrong.

Before tech exploded a couple centuries ago, the economy of countries followed their population, and their population followed their arable land. So Kings wanted more land and more people on them!

But with tech, 'people' were no longer a limit, bc productivity could be increased per person. But the economy was still bigger the more people you had. And since they weren't making more land, Malthus predicted that growing populations would end in mass starvation.

While there have been famines... Malthus was wrong.

Humans now grow more biomass calories and protein per year for our own use (and to feed our livestock) than the entire biosphere did 500 years ago. And not by a little bit. We grow and eat and feed SIX TIMES as much as the entire old biosphere!

-----------------------

With tech, productivity was based upon the harnessing of energy, mostly fossil energy. So economic productivity scaled not with workers but with fossil energy services. The amazing advances from 1850 to 1950 (railroads to air and space travel) scaled with a huge increase in energy usage.

Since 2000 or so, the economy has continued to grow while energy use has been flat. Energy and economic growth have decoupled. So energy availability is not a practical limit, like we might have thought in the 1980s or in the Peak Oil thinking 15-20 years ago.

---------------------

What about global warming? We need to switch to renewable power. It is already cheaper (without storage) than fossil energy, and the cost with storage is not prohibitive and falling rapidly. We already have all the tech, and just need to mine the materials and build it. And that energy is not only sustainable; by the time it has replaced fossil energy, it will be cheap enough (with storage) to scale by a significant factor. So tasks that require very large amounts of energy will be back on the table (including CO2 removal).

---------------------

Where does AI (technically AGI) fit in? It is the next act. We are not limited by arable land, food production, finite fossil energy or minerals (the renewable energy system relies on earth-abundant materials like aluminum and silicon).

So what is the limit? Human brain power and ingenuity. As the information economy matures, the need for info-skilled workers is decidedly higher now than it was in 1980 or 2000. So far we have just switched people from some careers to others. But what about when we can't get more people by switching? Then we will build more ingenuity (AGIs) and keep the progress engine going, just like when we hit the other limits.

---------------------

So, is the Earth finite? Yes. Are there technical limits to (1) land (2) food (3) population (4) skilled workers (5) renewable energy (6) minerals and (7) atmospheric absorption? Yes there are. Will any of these limits actually set a practical limit on economic growth as far as we can see? Nope.
 
One question I have is where the motivation to learn will come from. If ChatGPT can get a 90% on the LSAT, what am I studying for? We (teachers and professors) now need to use another plagiarism-finding tool. Once there are more AI instances, what will we be able to check the work against?
 
I was tempted to post on this with my thoughts on AI without watching the video. I am very glad I didn’t. What an eye opener. It is easy for me to see now how the disruption resulting from an uncoordinated race to roll out these models will go far beyond and have impacts far greater than any upheaval in our word-based occupations and the economic and social concerns that will go with that. It is a technical revolution, but not just another one. The future potential and likelihood for misuse, large and small scale, is frightening. If we can’t rely on our eyes and ears to distinguish reality, we are subject to manipulation, including by those who are motivated by greed, power or the intent to do us and our nation harm. The potential for creating chaos in many ways and on a large scale is very real. What the genie will look like, or even what it really looks like today, has shown to be unknown and clearly unpredictable.
 
I was tempted to post on this with my thoughts on AI without watching the video. I am very glad I didn’t. What an eye opener. It is easy for me to see now how the disruption resulting from an uncoordinated race to roll out these models will go far beyond and have impacts far greater than any upheaval in our word-based occupations and the economic and social concerns that will go with that. It is a technical revolution, but not just another one. The future potential and likelihood for misuse, large and small scale, is frightening. If we can’t rely on our eyes and ears to distinguish reality, we are subject to manipulation, including by those who are motivated by greed, power or the intent to do us and our nation harm. The potential for creating chaos in many ways and on a large scale is very real. What the genie will look like, or even what it really looks like today, has shown to be unknown and clearly unpredictable.
Yep, I thought like "what's the harm?" But the 2 examples of triangulating people with wifi signals, and actually knowing what someone is thinking...that made it click.
 
Malthus thought there was a limit to growth, and he was wrong.

Before tech exploded a couple centuries ago, the economy of countries followed their population, and their population followed their arable land. So Kings wanted more land and more people on them!

But with tech, 'people' were no longer a limit, bc productivity could be increased per person. But the economy was still bigger the more people you had. And since they weren't making more land, Malthus predicted that growing populations would end in mass starvation.

While there have been famines... Malthus was wrong.

Humans now grow more biomass calories and protein per year for our own use (and to feed our livestock) than the entire biosphere did 500 years ago. And not by a little bit. We grow and eat and feed SIX TIMES as much as the entire old biosphere!

-----------------------

With tech, productivity was based upon the harnessing of energy, mostly fossil energy. So economic productivity scaled not with workers but with fossil energy services. The amazing advances from 1850 to 1950 (railroads to air and space travel) scaled with a huge increase in energy usage.

Since 2000 or so, the economy has continued to grow while energy use has been flat. Energy and economic growth have decoupled. So energy availability is not a practical limit, like we might have thought in the 1980s or in the Peak Oil thinking 15-20 years ago.

---------------------

What about global warming? We need to switch to renewable power. It is already cheaper (without storage) than fossil energy, and the cost with storage is not prohibitive and falling rapidly. We already have all the tech, and just need to mine the materials and build it. And that energy is not only sustainable; by the time it has replaced fossil energy, it will be cheap enough (with storage) to scale by a significant factor. So tasks that require very large amounts of energy will be back on the table (including CO2 removal).

---------------------

Where does AI (technically AGI) fit in? It is the next act. We are not limited by arable land, food production, finite fossil energy or minerals (the renewable energy system relies on earth-abundant materials like aluminum and silicon).

So what is the limit? Human brain power and ingenuity. As the information economy matures, the need for info-skilled workers is decidedly higher now than it was in 1980 or 2000. So far we have just switched people from some careers to others. But what about when we can't get more people by switching? Then we will build more ingenuity (AGIs) and keep the progress engine going, just like when we hit the other limits.

---------------------

So, is the Earth finite? Yes. Are there technical limits to (1) land (2) food (3) population (4) skilled workers (5) renewable energy (6) minerals and (7) atmospheric absorption? Yes there are. Will any of these limits actually set a practical limit on economic growth as far as we can see? Nope.
You know your history. However, I am not convinced that examples from the past are able to suggest what will happen in a new situation in the future.

I remain unconvinced that there is no limit to growth. Why? Each and every example you used is from a situation so different in its ingredients, that I feel you're extrapolating beyond their domains of validity. Second, all productivity is associated with undesirable impact on our planet. Space, energy (even if 100% renewable), (mineral) resources, noise, light, etc. Even with 100% recycling and 100% renewable energy, growth entails the need for more resources.
Moreover, perpetual growth means perpetually higher consumption (to whom else would those products, be they physical or services, be sold?). There is a limit to consumption - even if only due to the limited hours in a day, the limited lifetime of humans, and the limited number of humans fitting on the earth (and yes, that last limit is higher because of higher efficiency in food production, but it's not infinite).

A finite sized planet cannot sustain infinite productivity growth. That is the same as saying "there is a limit to growth".
 
You know your history. However, I am not convinced that examples from the past are able to suggest what will happen in a new situation in the future.

I remain unconvinced that there is no limit to growth. Why? Each and every example you used is from a situation so different in its ingredients, that I feel you're extrapolating beyond their domains of validity. Second, all productivity is associated with undesirable impact on our planet. Space, energy (even if 100% renewable), (mineral) resources, noise, light, etc. Even with 100% recycling and 100% renewable energy, growth entails the need for more resources.
Moreover, perpetual growth means perpetually higher consumption (to whom else would those products, be they physical or services, be sold?). There is a limit to consumption - even if only due to the limited hours in a day, the limited lifetime of humans, and the limited number of humans fitting on the earth (and yes, that last limit is higher because of higher efficiency in food production, but it's not infinite).

A finite sized planet cannot sustain infinite productivity growth. That is the same as saying "there is a limit to growth".

But that is what the OP video is all about! That we can't predict what happens next, in particular the known dangers and the unknowable dangers. But ultimately, it is the realm of _information processing_. Bits do seem a lot less limited than atoms. Do we really think the human brain is the most efficient way to assemble earth abundant atoms for information processing (and by extension cognition and general intelligence)? Or is it just one way of doing that with the palette of DNA and protein?

Jet planes fly faster and higher than falcons. Locomotives are stronger than elephants. Submarines can dive deeper than whales.

Similarly, AGI, unencumbered by the limits of the human brain will drive innovation further and faster, and this will accelerate technology and progress. And as a result it will boost economic output normalized however you choose, per human, per year, per kilowatt-hour, per hectare.

I am not saying that this will be all a good thing or spread around fairly. Just that it is a likely to happen sooner or later.

As for resources, the Earth's crust is THICK. 100 million square kilometers of area and 10 kilometers deep. A billion cubic kilometers of rock, and a similar amount of seawater. And we humans use a measly few km^3 per year of fossil hydrocarbons now to power most everything we do.

The weakness of the abundance argument is a few chokepoints. The atmosphere is MUCH smaller, only 10 m thick if we condensed it to a solid (or 10^6 km^3 solid), so we can easily change its composition. And our climate is sensitive to this thin layer's composition and transparency, which we can change with a couple hundred km^3 of solid CO2 (equivalent). Similarly, our topsoil and biosphere also have volumes more like 10^5 km^3, and a world with 10 billion humans can easily deplete or damage them.
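
A quick back-of-envelope check of those volumes, using the same round, order-of-magnitude figures as above (not precise geophysics):

```python
# Round numbers from the argument above: ~100 million km^2 of area,
# ~10 km of crust, atmosphere condensed to a ~10 m (0.01 km) solid layer.
area_km2 = 1e8
crust_depth_km = 10.0
atm_solid_km = 0.01

crust_km3 = area_km2 * crust_depth_km   # -> 1e9 km^3 of rock ("a billion")
atm_km3 = area_km2 * atm_solid_km       # -> 1e6 km^3 of condensed atmosphere

print(f"crust: {crust_km3:.0e} km^3, atmosphere: {atm_km3:.0e} km^3")
print(f"the crust is ~{crust_km3 / atm_km3:.0f}x the condensed atmosphere")
```

So, on these round numbers, the crust outweighs the condensed atmosphere by roughly a factor of 1000, which is the chokepoint: a couple hundred km^3 of solid CO2 is negligible against the crust but a meaningful fraction of that thin 10^6 km^3 layer.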

But we can build a society and an energy system that doesn't emit much greenhouse gas, and recycles what it does make. We have already built our own private biosphere (agriculture) that dwarfs the one that was here before in terms of productivity. Topsoil? If that becomes a limit, we could raise air protein and feed 100 billion humans off of renewable power, no dirt or plants required.

The mineral/material side is not limiting at the current population of humanity. We think there are limits bc we carelessly polluted our little atmosphere with CO2 and CH4 before we bothered to figure out how to not do that. And bc we raise our food using VERY inefficient methods largely for 'traditional' reasons.

Can a (sustainable) economy go to infinity? Nope. Can it go to 5X the $$$/human in 30 years, or 25X in 60 years, making our current worries about the cost of EVs or solar power or CO2 scrubbing for climate control seem 'quaint' in a few decades? Sure, why not?
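For what it's worth, the 5X-in-30-years and 25X-in-60-years figures are internally consistent: they both correspond to the same steady compound growth rate, which a couple of lines of arithmetic confirm (the numbers are the post's own hypotheticals, not a forecast):

```python
# What steady annual growth rate gives 5x output per human in 30 years?
# Solve (1 + r)^30 = 5 for r.
rate = 5 ** (1 / 30) - 1

print(f"implied growth rate ~ {rate:.1%} per year")  # about 5.5%
print(f"after 60 years: {(1 + rate) ** 60:.1f}x")    # 5^2 = 25x
```

So 25X in 60 years is just the same roughly 5.5%-per-year growth compounded twice as long.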

Folks from 100 or 200 years ago would find our (average) lives amazing and wealthy and quasi-magical. Progress hit a hiccup from 1970 to 2010, while it was working out some kinks with the cost (financial and health) of energy and the scaling of information tech. But the next 70 years could easily be as transformative as the 70 that came before 1970, spanning from the Wright brothers to the moon landing.

Of course, those years 1970-2010 were pretty peaceful (on a global average), compared to the earlier part of the 20th century. The rest of the 21st could similarly be a terrible mix of wonders and terrors, both supercharged by 'progress', and yes, AI.
 
We can't predict what happens next - yet your posts are full of that.

Let's meet again in 250 years to see who was right :cool:
 
Sorry for the aside...

Getting more to the gist of the video, which is that 'Gollem-class' AIs will remake or break our social system in much the same way that social media and bots did.... um, yeah, sure.

The speakers have spent years unpacking what social media and smartphones have done to society (for good and ill) and are sounding the alarm that this time it will be the same thing, only much worse, and in ways that we can't imagine!! Scary!

They talk a good game, giving us a fresh survey of the wonders and horrors that Gollems represent. They speculate that expanded deepfakery will 'break reality' completely (versus only denting it now), and suggest that this will make democracy impossible. I find this all a bit tenuous and alarmist, to be honest.

They say that we will have to write new laws, and we don't have a history of writing laws 'fast enough' to prevent 'entanglement' with society. I agree with the first part, and am not so worried about the second part.

Technology IS society. Society IS technology. (This is in fact the central point of Marxism.)

New tech is going to 'entangle' with society. That's what society does. Old ways of doing things will go away, and many people will be harmed by that. New ways of doing things will be developed and, eventually, painfully and too late, be regulated by new laws.

What do I worry about? Yeah, new AI scams are scary. We will all need to do better with our authentication and biometrics on our private and financial data. Older people will still get scammed, as they already are on a huge scale. But that is ALSO largely a technological problem.

Breaking reality? What about Sherlock's creator Conan Doyle being fooled by a couple of schoolgirls who doctored photos of fairies in their garden? That happened. Doctored photos 'break reality', and have for over a century. The legal system worked BEFORE we had documentary evidence or fingerprinting or video cameras on every corner. It will still work when video and voice evidence is widely disregarded in an AI future.

And while the speakers make a big deal about how they are NOT talking about the perils of the 'AGI apocalypse', they lean hard into fear-mongering about exactly that. Their scare quote about human extinction is NOT about chatbots, it is about a future AGI! They are implicitly scare-mongering about AGIs, and then redirecting the fear onto their pet issue... Gollems harming society... in ways that are nebulously described with an evocative cartoon. Really? Not very logical.

I know, what if a Gollem invents the world's funniest joke? And then we all literally die laughing! Scary. :p

Their examples of 'emergent' behavior are partially legitimate, since they go to their thesis of unknown dangers, but they also support the AGI fear-mongering at the same time, bc they imply that (somehow) AGI could emerge spontaneously at any time!

Stepping back, while I do believe that AGIs will be invented, and could be dangerous in the long term, I don't think their appearance will result in the instant apocalypse of the human race (per the Terminator movies and many others).

I have often mused about how we would KNOW that an AGI has been built (or emerged). Would Google or Microsoft issue a press release? Like they do for chatbots, but saying 'this time it's a real boy'?

Rather, I think that we will know AGIs by the actions they take in the world. The digital world will have not only humans and their primitive bots running around doing what humans do, but ALSO some other actors doing things that humans don't know how to do (like hacking things our best hackers can't hack), or spoofing celebrities or politicians, or stealing large sums of money, or pulling elaborate 'pranks' for attention.

What will the transition from our current world to that future world (with an internet containing aliens/monsters/friendly wildlife) look like? It will probably be really scary, and the process will be shocking even if it is not harmful. The first AGIs will be like a 'first contact' with an alien civilization, but one whose tech is only slightly better than ours (bc it is built by extrapolating human science and tech). So the video speakers are right to use the 'first contact' metaphor for how transformative the dawn of AGIs will be, but misplace it onto their bugaboos, social media and 'Gollems'.

The human hive mind is old, and has had many mind viruses, delusions, shocks and disruptions before the first computer was booted up. These include political and religious delusions (I don't want to offend people), that have led to many genocides over the last 500 years. The science of how to manipulate the human hive mind (marketing) also became highly refined decades ago, and we are all still coping with bad actors using that to cover up their corruption, pollution and lies, leaving us less healthy, less wealthy and less wise.

Social media and Gollems are just more tools. A credulous society (like ours) will fall further victim (as it already has) to these new tools being wielded for marketing and political purposes. We can only hope that a skeptical and logical citizenry will discount the new, bigger BS firehose the way it currently often tunes out the existing one.

Rather than assuming that the Gollems will make some super-QAnon or DeepPersuasion or a scam that would fool Einstein, I for one like to think that our experience with social media was a vaccination. That we as an online population have developed some 'herd immunity' to digital BS, which might make us more resistant to the real Gollems currently being unleashed. Is a kid who plays with a TikTok filter really going to believe video evidence the same way a 60-year-old will? I doubt it.

Or at least that is what I tell myself.
 
Rather than assuming that the Gollems will make some super-QAnon or DeepPersuasion or a scam that would fool Einstein, I for one like to think that our experience with social media was a vaccination. That we as an online population have developed some 'herd immunity' to digital BS, which might make us more resistant to the real Gollems currently being unleashed. Is a kid who plays with a TikTok filter really going to believe video evidence the same way a 60-year-old will? I doubt it.
For some, yes, they will be skeptical, but there are many who suck up the latest drivel, slurs, and gossip like candy. Most are too lazy to fact-check, and if they do, they check with a source that is already tainted, possibly crafted by AI in the not-too-distant future. Facts and science are not relevant in this sphere.

My suspicion is that there will be some chaos generated that will challenge societies. Some will adapt and deal with it better than others.
 