AI - Let's talk about Artificial Intelligence > Likes and Comments
message 1:
by
Gertie
(new)
Jan 30, 2023 11:43AM
We've been seeing quite a surge of AI generated material lately, so I think it's time to start a thread on it. Thoughts?
I’m not sure. So far what I have read makes me question the reliability of the information the AI spits out. Going back to garbage in garbage out and how to teach it that photos of people don’t have 10 fingers on one hand, or how to write a citation for a paper that comes from the web pages it used to write the paper instead of making the citation up. These are problems I have read about so far. Is it really a good idea to be scraping the internet for the input? There’s so much garbage out there. How do you give AI information that isn’t biased?
Yeah, I'm not sure how it could be made unbiased, since what goes in is ultimately what comes out... (sort of the nature/nurture thing, only digital). I've seen some art, recipes, stories, crochet patterns etc. made by AI, and some of it is hilarious, because it gets so much right but enough wrong that it's quite odd.
But I have to think that if something is able to learn, or we program better (learning from mistakes) we could eventually get to somewhere very useful. There's already some stuff coming out that is pretty surprising.
But it's scary too. The more difficult it becomes to know what is real (in an image, a video, a voice recording), the more difficult other things become. There's so much not to trust, and what does a world look like when you can't trust anything except what you experience in person (assuming some things)?
I'm not even getting into the whole AI that learns and develops opinions and decides humans are a virus we've all read about.
Because of how I want to see the world, when I read books I always prefer the ones where the AI turns out to be an ally/friend of people.
Definitely a fascinating area of study and one that I am attempting to get smarter about each day. I changed jobs a couple of years ago into one that focuses on internet security and am studying for my CISSP (Certified Information Systems Security Professional) with a goal to get it inside of the next 12 or so months.

Right now, AI for consumers is at the entertaining stage, but I think it is already being used for serious but limited use in business, government, and science, where the garbage in/out situations can be somewhat controlled.
Serious researchers are talking about things like Asimov's laws of robotics and the trolley problem.
Although I think Elon is full of it and is almost like a carnival barker with his "self-driving cars are here" pitch, self-driving cars are coming gradually (*full disclosure: I work for an automotive company and my opinion is not necessarily that of my employer*). Think about something as basic as insurance. Who should pay for insurance for a car when there is an accident caused by AI? Think about the trolley problem. Should your car prioritize your life over that of a pedestrian who runs into the road? Where does that line get drawn?
ChatGPT can do great things already and it is still fairly new (although it definitely has flaws). In cybersecurity, criminals are already using it to translate their spams into English so that it is less obvious when they send you emails. It can write code that can get a criminal 95% of the way there for launching a cyberattack. That's the bad side.
On the plus side, it can help level the playing field in education. Think about a student who has English as a second language. They can use it to help correct their grammar. Think about using it to learn new fields of study (I frequently ask it to "Ask me a question about the CISSP." And it does, and I give it a paragraph answer. It tells me if I'm mostly right and it gives me more context, especially where the answer is weak. Although the funny answers get the most press, I can tell you that it is probably at least 95% right on those types of questions. You just have to put in the effort to know where to look).
I fully believe that AI is going to lead to changes in how the economy is structured. Jobs like basic call center work or even things like drive-through order taking will go away. Manufacturing is already being massively changed by robotics, and this will shift it even further.
To me, the big question isn't whether it will lead to huge changes in the next 5-20 years, it is how can we prepare so that people currently doing those jobs will not be left behind in the economy. Will the gap between the top 1% and the bottom 50% continue to skyrocket apart? Or will regulations be enacted to help the transition? And how can those be enacted in a way to not give away the competitive edge to other countries willing to cut corners?
Besides the economy, can the industry be guided to curb the ability of governments to use AI to repress their own citizens - even if they do so using a gilded cage?
The company I work for develops the CIW certification exams and courseware (Chip mentions he's pursuing CISSP - a high end credential in the same space we're in). Just recently (as in, 2 months ago), we did a sprint to produce two different JTAs (Job Task Analyses) and exams, for "Artificial Intelligence Associate" and "Artificial Intelligence - Data Science". We are now in the process of producing the courseware to prepare students for the exam. We also have 2 more AI programs ("Machine Learning" and "Programming") following soon. These programs (in the context of our larger library of offerings) are targeted to upper high school and community college level students.

While I've reviewed our exam content, this in no way makes me an expert in the topic. Nevertheless, here's the impression I come away with: We (society in general) are using the term "Artificial Intelligence" very loosely, to mean a lot of different things. Those not in the IT field can easily misinterpret it, or think that it can do more than it really can (so far). It has become the latest buzzword and marketing benefit to sell all different kinds of software. (Need a communications tool to handle incoming customer questions? Use our Chatbot powered by AI!) What it really means is the programming and data processing around very VERY large datasets - what was called "data mining" ten years ago. Developing AI tech is really about how to handle the data, how to validate it, how to use it to churn out new data, or predictions, analysis, etc. (and images and video are just a different form of data).
There's lots of fields that can benefit from AI: Weather analysis, any scientific field, retail (ecommerce), finance, health, insurance, etc. And there are the artists: the novelists and movie makers who will find ways to explore how the tech will impact society. And of course, there are always scammers and criminals who will find ways to mis-use it before most people really understand it.
Chip wrote: "To me, the big question isn't whether it will lead to huge changes in the next 5-20 years, it is how can we prepare so that people currently doing those jobs will not be left behind in the economy. Will the gap between the top 1% and the bottom 50% continue to skyrocket apart? Or will regulations be enacted to help the transition? And how can those be enacted in a way to not give away the competitive edge to other countries willing to cut corners?..."I think this is the biggest issue/danger and it's incredibly worrisome. More unemployment.
I knew you all would gobble up this topic!

A coworker and I were just discussing the issue of employment and automation, AIs etc. a few days ago. It's going to happen for sure. Ultimately, if handled well, it can benefit us, but that transition period into, say, a universal base income or similar approach could get very ugly.
Lisa I agree about the terminology. Artificial Intelligence to some people just means a computer and to others it means a digital facsimile of a human intelligence. They really aren't the same thing.
Change to society in this area could end up happening pretty rapidly; it's going to be interesting to see but potentially nerve wracking too.
Well, Google had an ad for its AI chat bot and it failed. It gave an incorrect answer.

https://www.businessinsider.com/googl...
“In the ad, a user asks Bard, "What new discoveries from the James Webb Space Telescope can I tell my 9 year old about?"
"JWST took the very first pictures of a planet outside of our own solar system. These distant worlds are called 'exoplanets.' Exo means 'from outside," Bard answers, among other "facts."
However, the first photograph of an exoplanet was not taken by the JWST. It was taken in 2004 by the European Southern Observatory's Very Large Telescope, according to NASA. 2M1207b was the first directly photographed exoplanet, per the space agency.“
So that just goes back to what, where, or how this AI is getting its data to answer a question. It either didn’t scrape the internet for enough data or it isn’t capable of looking at two pieces of information and determining how to use them to answer a question.
As I understand it, it works on statistical prediction of text. It is not meant to be fact-based. From the internal model, it was statistically likely that it could make that statement with confidence. It was not based on the software looking at facts and deciding on them. That is why, when it is wrong, it is confidently incorrect. I've heard it described as a "man-splaining" engine. There is a lot of hype around AI these days. The technology is remarkable and has come a long way. However, it needs to do more than this kind of predictive analysis to compare to human intelligence.
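A toy illustration of that "statistical prediction" idea (completely made-up training text and names, not any real model's code): even a bare-bones bigram model will confidently complete a sentence with whatever continuation was most common in what it read, with no notion of whether it's true.

```python
from collections import Counter, defaultdict

# Made-up "training text": the wrong claim appears twice, the true one once.
corpus = ("jwst took the first exoplanet photo . "
          "jwst took the first deep field photo . "
          "vlt took the first exoplanet photo .").split()

# Count which word follows which (a bigram model).
following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def complete(word, length=6):
    """Greedily emit the statistically most likely continuation."""
    out = [word]
    for _ in range(length):
        options = following[out[-1]]
        if not options:
            break
        out.append(options.most_common(1)[0][0])
    return " ".join(out)

# The model "confidently" repeats the majority claim, true or not:
print(complete("jwst"))  # jwst took the first exoplanet photo .
```

Because the false claim is simply the more frequent pattern in its input, the model asserts it every time, with no hedging and no fact-checking — which is roughly why a much larger version of the same idea can be "confidently incorrect."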
Nancy wrote: "How do you give AI information that isn’t biased?"

Quite simply, you don't. It's probably the hardest question in AI (even more than "how do you make a genuine Artificial Intelligence!"). Even if we could make it, there's nothing we can do to stop it learning from bad sources. Which, ironically, would make it more capable of passing the Turing test: people learn crap, all the time, and many never learn discernment. If an AI's answers were always right, we'd be sure it wasn't human!
Chip wrote: "I think it is already being used for serious, but limited use in business, government, and science where the garbage in/out situations can be somewhat controlled."
Yes, but... :-)
Business, government, and science all have their examples of feeding bad data to human analysts and getting the results they want. AI makes it possible to do that faster.
"Think about the trolley problem. Should your car prioritize your life over that of a pedestrian that runs into the road? Where does that line get drawn?"
I doubt that's really the problem (it's an ethical problem, but probably not a major legal problem -- laws will be passed to ensure that either the driver or the pedestrian is at fault, and the software will follow the law. E.g., if the pedestrian is always deemed to have right-of-way, then cars will have to drive slowly enough to never hit an unexpected pedestrian!). The real problem there is that laws will always be behind the technology.
"how can we prepare so that people currently doing those jobs will not be left behind in the economy."
We've had robots taking people's jobs for decades now. And still, we lose jobs to Asian companies that are using people! I'm not terribly worried that countries cutting corners on the ethics of their AI will be worse.
Lisa wrote: "And of course, there are always scammers and criminals who will find ways to mis-use it before most people really understand it."
Ain't that the truth.
Nancy wrote: "Well, Google had an ad for its AI chat bot and it failed. It gave an incorrect answer."
I saw that. So, it isn't _really_ a man-splaining engine. If it was, it would have doubled down and insisted that the VLT picture was incorrect or debatable!
One should never let the marketing people quote "facts" (especially when generated by a chat engine) without passing them by someone who would actually know the facts.
I think it's interesting to compare Bard & ChatGPT to the art generating engines. I've played a little with both, and I think it's fair to say that they're quite similar -- but the real difference is that nobody expects the art generators to be creating photographs ("facts"), while we have been led to believe that the chat engines give facts. Ronan's totally right -- it's not so much "fact" as "inference". That's a perfectly good start to the scientific method, but not sufficient. It's like a 5-year old answering "Why is the sky blue?" by saying "because it's light out". Totally true, but insufficient.
I'd say Bard's goof is indeed a lack of training. It has the press releases from the Webb, and knows that it did discover exoplanets, but either doesn't have the press releases from the VLT or gave them far less weight. While the chat engines are not "fact based", they do use "facts" -- anything gleaned from the internet. It's how they string those facts together that is statistically predictive (and which facts to choose from inconsistencies is also likely statistical). They absolutely need to footnote those facts.
I always said I wasn't worried that an AI would ever take my job. I'm still not... but only because I'm retiring this year! I'm still not expecting to see a self-conscious artificial intelligence in my lifetime, but I do think I could see something that could pass the Turing test. [ChatGPT absolutely can't -- I asked it!]
I don't understand the rules about who can see what on facebook, so maybe you can read this, maybe not: https://www.facebook.com/abigail.mcqu...
My friend Abby is a marine biology professor at Plymouth University. ChatGPT wrote her this poem:
ChatGPT: create a poem about plankton with a girl named Abigail
Response:

Abigail dove into the sea,
Her eyes fixated on what she would see.
Small creatures, plankton,
So delicate, like a ton
Of shimmering stars, they danced with glee.

Their dance was like a spell,
A mesmerizing sight, can't you tell?
Abigail smiled, in awe,
At the beauty she saw,
Of these tiny creatures, so hard to dispel.

Plankton are the base of the sea,
Nourishing life, so it can be.
Abigail knew this well,
And it was easy to tell,
Her love for them was wild and free.

So here's to Abigail,
And the plankton she adores,
For without them, it's clear,
The ocean wouldn't be here,
And that, my friends, is what ocean life is for.
Not, imo, good poetry, but like good poetry — more true than fact.
Derek wrote: "Nancy wrote: "How to you give AI information that isn’t biased?"Quite simply, you don't. It's probably the hardest question in AI (even more than "how do you make a genuine Artificial Intelligenc..."
Thanks Derek. I'm not retiring next year but it is on my horizon (some day I hope!). In the medium term, I don't see AI being as disruptive as the hype but I foresee it creating a whole new set of workers who will be trained at interpreting and psychoanalyzing the AIs. An interesting turn-around from those of us who remember ELIZA!
Ronan wrote: "I foresee it creating a whole new set of workers who will be trained at interpreting and psychoanalyzing the AIs. An interesting turn-around from those of us who remember ELIZA!"

Yeah, I was thinking along those lines (and for those who don't remember ELIZA, she could seem very creepily human).
Derek wrote: "Yeah, I was thinking along those lines (and for those who don't remember ELIZA, she could seem very creepily human)."Right. Certainly good dystopian fodder there. Remember Blade Runner's Voight-Kampf test? It seems there is already a way to "detect" ChatGPT:
https://wonderfulengineering.com/thes...
In terms of books featuring AI, Murderbot anyone? All Systems Red, The Murderbot Diaries #1
https://www.goodreads.com/book/show/3...
Mattie wrote: "In terms of books featuring AI, Murderbot anyone? All Systems Red, The Murderbot Diaries #1
https://www.goodreads.com/book/show/3......"
Those are fun books and another one is coming out later this year.
The only thing wrong with Murderbot was that the first few were all novellas and just too darn short! I'm looking forward to the new one.

Martha Wells was a fantasy writer first, and I think she was doing reasonably well, but Murderbot shot her into the stratosphere and now her fantasy novels are being rereleased.
Derek wrote: "The only thing wrong with Murderbot was that the first few were all novellas and just too darn short! I'm looking forward to the new one.Martha Wells was a fantasy writer first, and I think she w..."
They are definitely too short! I am looking forward to the next book too.
I absolutely love Murderbot! I started reading her other books because of it. I keep trying to get us to read City of Bones and it's being rereleased by Tor.
I just updated the title of the topic... more and more lately I have been seeing similarities between now and the time of the industrial revolution. It was a major turning point affecting pretty much everyone. But it was painful. Luddites were resistant to technology out of concern for loss of jobs for one thing.
I have been hearing more and more varied opinions on AI... some think it's taking away jobs and just sucks.
I'm of the opinion that change is inevitable, and it's also painful. The only thing we can do is try to make it as smooth and painless as possible. For example dairy farmers are struggling these days due to less dairy milk and more alternatives, but the long-term end result is a positive (well, depending on which alt milk we are talking about, some are better than others) but the change still wreaks havoc in some people's lives.
Same with AI. It's going to be an adjustment, because AI will indeed, again, be taking over some jobs. Same thing for robots, which are recently in the news.
I guess my concern is... how far do we take it? We don't want to stop thinking for ourselves. There are some serious potential drawbacks and repercussions here.
Transferring over a comment from Rebecca, I gave bad directions before: "I am against AI, it's stealing jobs and people's intellectual property. Lots of controversy on use of AI. I do agree it could be useful for indie authors to save money, but the fact that it's taking away jobs from voice actors in general is crappy."
I think we are all a little nervous; there is some anticipation. It's something made by people, and we are good at screwing things up, so the way it integrates with society and changes it (how it will affect individuals, like Rebecca said, as well as how it will affect humanity overall) is a bit of an anxiety-inducing thought.
By far the biggest challenge with AI is that the people “in charge” (almost every fed-level politician and top administrator) have no earthly idea what it is, how it works, or what to do about it. Their scope of comprehension is basically binary: “Promote/incentivize it” or “Ban it.” Watch the congressional testimony sessions about social media a few years back. The legislators don’t even understand how these companies make money, much less the ins and outs of the actual tech.
I am sometimes shocked at how useful and time saving it can be when it works. I am going on a cruise and normally to check the weather, I go to a weather site and just repeatedly put in port locations and scroll to find the date.

On a whim, I just typed "For the Norwegian Gem cruise from Sep 27 to Oct 4. please create a table showing date, port, and the weather forecast (highs/lows/precipitation)" and it did (note that to avoid outing myself on a site publicly reachable, I modified dates and ship, but that query works - at least on Microsoft co-pilot).
I use it for work related tasks that save me a ton of time as well. I think that it really is the next phase of the industrial revolution in supercharging the computer revolution.
That said, I agree with Rebecca in terms of risks. It really is blatantly stealing intellectual property and is operating in a wild west type of way. In that way, it reminds me of napster when it came to music streaming. It's available and there is no putting the genie back in the bottle, but there will likely be some major lawsuits that will put guardrails around what it is allowed to legally do.
It may also lead to energy and water issues as huge data centers are built to power the computations needed to do this (several major tech companies have already said that they will miss their environmental targets due to AI expansion).
Luke wrote: "By far the biggest challenge with AI is that the people “in charge” (almost every fed-level politician and top administrator) have no earthly idea what it is, how it works, or what to do about it. ..."

And it's SUCH a grey and complicated topic. It's like a lot of things, not inherently good or bad.
As it grows I can only hope we find more ways to be energy efficient so we aren't just creating another environmental problem.
OK let's try this again lol ;) I am against AI, I really don't like that it's stealing jobs and people's intellectual property. I can see the benefit for indie authors for audio books, but overall I am pretty anti AI.
Chip wrote: "On a whim, I just typed "For the Norwegian Gem cruise from Sep 27 to Oct 4. please create a table showing date, port, and the weather forecast (highs/lows/precipitation)""I love how you included 'please' in your instruction to the AI :)
Come to think of it, my husband's car says 'please', as in "please turn left at the next junction". It's a BMW.
Chip wrote: "I am sometimes shocked at how useful and time saving it can be when it works. I am going on a cruise and normally to check the weather, I go to a weather site and just repeatedly put in port locati..."Chip,
You've touched on something huge, here, simply in regards to the A.I. Centers.
They seem to represent such an enormous, potential edge for whatever corporation / entity, manages to one up another, and all indications seem to say that one of the key bottlenecks is resources.
Particularly power.
To really simplify things.
Something I'm not sure many apocalyptic / Post human / Rise of the A.I. books / movies talk about.
Many scenarios talk about the A.I. Escaping into the broader "Net", but what we've seen, in reality, are these enormous processing centers, and the race to power them.
Then, you have the great many aerial drone shows in China, one recently having a reported swarm of possibly 8K drones in precise formations creating extremely complex, choreographed, maneuvers.
These things should scare the begeebers out of most people!
Things are moving fast, fast, fast.
Sue wrote: "I love how you included 'please' in your instruction to the AI :)"

My wife insists that if I don't say "Please" when I ask Google to set a timer, they're going to come after me when the AI revolution happens.
How I feel about it? Complicated.
"By far the biggest challenge with AI is that the people “in charge” (almost every fed-level politician and top administrator) have no earthly idea what it is, how it works, or what to do about it."
Not just the politicians and administrators. Stephen Hawking, brightest mind of our time, came out against it for reasons that made zero sense. He was brilliant, but… he should have stuck to his field.
otoh, it's just slightly behind crypto mining in the damage it's doing to our climate! (Okay, I might be hyperbolic, but the fact is NOBODY knows how much energy we're using for AI!)
"Bitcoin’s energy consumption is a pressing issue, with annual estimates ranging from 91 to 150 terawatt-hours" https://crypto.com/bitcoin/bitcoin-en...
As for whether it's a good or bad thing: despite the fact that computerization has been taking away blue collar jobs for decades, AI is not going to hurt them at all. The people it is hurting, and will continue to hurt, are the creatives. If I'm writing my novel, I can pay someone good money to design a cover. Or, I can ask an AI to design a cover—given how badly some covers match the actual story, I bet my odds of getting something representative are actually a bit better—but it's just going to be recycling images that came from artists who are not getting paid.
I've heard of lawyers using AI and that's just nuts. Contract law is all boilerplate, but you don't need an AI to do that. But you're going to be in a world of trouble when you let your AI connect the boilerplate and miss a few commas! It's the same for any skilled business. AI can probably help you, but you can't trust it to do your job without careful oversight. Many people who have tried, have already paid dearly.
I'm not worried about SkyNet (or even Neuromancer). These AIs are still stupid. I have no worries about them getting smart any time this century (I was in the computer business for 45 years, and we've been hearing "AI is ten years away" for the whole time). What I am worried about is people assuming they're smart, and believing every word they say without engaging their critical faculties: something that people have been doing less and less of in the last few years.
Derek wrote: "Sue wrote: "I love how you included 'please' in your instruction to the AI :)"My wife insists that if I don't say "Please" when I ask Google to set a timer, they're going to come after me when th..."
Derek,
I Love this,
"My wife insists that if I don't say "Please" when I ask Google to set a timer, they're going to come after me when the AI revolution happens."
LOL,
So rich!
As for the jobs,
You see the impact of "A.I." on creatives.
I saw it in welding.
We are so focused on "Smart", that I think many of us aren't seeing why all these corporations and entities are creating these enormous "A.I." centers.
When I was welding, as early as about 9 years back, there was a great push to transition to Computer aided, robotic welding, or, autonomous welding, or, whatever name is being used.
Machine welding was already a thing where the variables could all be controlled and accounted for.
For the last three years, or so, of my welding career, I worked creating Gloveboxes and RABs (nuclear and pharmaceutical / chemical), and we would have engineers visiting us seeking to gain a bit of insight in our aspect of welding.
My shop had a bunch of holdovers from Rocky Flats and Coors (Yep, back in the day, Coors built gloveboxes and the like, in order to keep their welders fully utilized. They were big and, they were good).
We were one of the few "hand" shops left in the industry, so I was told, and, from time to time we'd get pieces in from other outfits that relied on the computer assisted machine welders (A welder in a booth monitoring the machines),
Those boxes were tight!
We were getting more and more complex requirements from the various National Laboratories, tighter and tighter tolerances, and it didn't look like the shop would be able to produce some of the things being asked of it.
In the beginning, everything was done the way we were doing it, and, some of the men I worked with (Welders, Testers, Engineers), had been pioneers, or, at least, second and third generation, and they were seeing an end to their trade.
I came from the Traditional Timber Framing world, where, my only power tools, for years, were a chainsaw and a 1/2" drill. And, before those tools, I spent a number of years doing everything by hand. We used the old methods, converted much of our timber from raw logs, right down to the felling of many of the trees.
The automation we are looking at right now, is the tip of a new, "Industrial" revolution.
The machines that replaced my timber framing forebears, were, largely, still operated by people, BUT, far fewer people were needed (Think of how a simple back hoe, replaces dozens of men with pickaxes, then go from there).
We might, now, judge the kind of work those earlier machines took away as being a thing which "saved" those workers, but that is largely because, in this country, we've been conditioned to believe that manual labor of that sort is for the "low".
But, many people, for thousands and thousands of years, lived productive, satisfying lives earning a living with their hands. Building those frames, that way, helped me understand both the material strengths, as well as the engineering behind the traditional designs.
I'm a trucker now, and, our tractors have so many layers of sensors. One of the newest aspects in them, are programs which "learn" your route and, downshift, apply the engine brake, and, actual brakes in response to how IT thinks you should be driving.
What the programs aren't doing, right now, is accounting for weather.
In inclement weather, most of the sensors go out, and,
it sees no difference between an 80 degree, sunny day in Florida, and a -10 degree night on an icy stretch of I-80 heading into Provo canyon.
Downshifting revs the engine, the ice removes traction, and, applying the engine brake under those conditions is... disastrous.
We've had so many incidents, including the total wipe of a power unit and its trailer, due to the program deciding that the correct speed needed to be, Zero, NOW.
Thankfully, no one else was on the road.
This is happening. In every industry that matters. We are simply in the infant stages of a new industrial revolution. The machines don't need to be "Smart", in the way we see ourselves.
This is a race, and, everything depends on it, that's why you're seeing so called concerns for the environment being thrown out the window by those who, just recently, said all of humanity is doomed, unless we... Xxxx.
Today,
they don't care.
Everything is at stake, and, if they fail, they are, done.
My two cents.
(Sounds like a book in here, hmmm )
I keep thinking of the many books I've read where many of our interactions are replaced by those with machines and AI. E.g. go to the ER, it's a machine, plug your arm in to get it fixed. No humans around except other patients. Need legal advice, again, all digital. Without a human behind it.

I think AI can accomplish a lot, but without an actual human available or behind the wheel it's a bit terrifying.
This is somewhat like when all of our phone calls starting going to menus. The compromise has been that you can ultimately get to a "real person" by hitting zero or somesuch. I hope we can at least maintain something like that.
Gertie wrote: "I keep thinking of the many books I've read where many of our interactions are replaced by those with machines and AI. E.g. go to the ER, it's a machine, plug your arm in to get it fixed. No humans..."

Agreed!!!
I'm sitting here, editing a chapter, it's got a good amount of dialogue, and, I haven't been happy with it.

And, as I'm working it, I need to hear it,
So, I find myself simply opening my link to the A.I. voice lab, I use, and copy and paste the text in, and, within less than a couple of minutes,
it is there, ready and, speaking to me, in my own, cloned voice, which even fooled my wife (who is blind!).
This was when I realized that, for me, I have been using it as a tool.
My intention is to narrate my own works and I find the lab does a couple of things for me,
First,
I need to hear the words, sometimes, in order to see how they flow, how the words on the page match the images in my head.
Second,
It helps with editing, in the finding of words I've omitted, or, misspelled (Spell correct or auto fill, or, whatnot).
And,
Lastly,
It also acts as something of a training tool for me in learning how to narrate.
When I was welding, where Certs and full pen were so important, my mentors used to seek to get me to emulate the machine welders, consistency was what they were trying to get me to copy.
I compare clips of myself reading a chapter, versus the machine me, in my own voice, reading the chapter and it really points out those flaws, which, are generally that I am forever changing my voice when there is no cause to.
I find it a frustrating thing, this learning to "Read".
Yes, I think the technology of today is Very scary, but, I do think it can be put to use.
There is so much potential good and bad, in the short, but, powerful scene you paint, Gertie.
My immediate image of,
"it's a machine, plug your arm in to get it fixed."
Was of a man,
Stumbling into his hovel, on some asteroid, a lone miner, with no help anywhere but near at hand.
Stumbling in,
through automatic door,
The place a wreck, as only a man, alone, on a mission, years away from home can make it,
fishing out this vital device, thrusting a badly broken, damaged arm into it, before passing out, having time only, to say,
"Fix it."
Tim wrote: "I saw it in welding."
I listened to an interesting discussion yesterday, where a professor described AI as, necessarily, "having results we cannot predict". If it's completely predictable (as most welding jobs should be!), it's not AI. What you're describing is automation, and I agree that that has hit blue collar jobs hard, but I honestly can't see AI ever being much use in blue collar fields. Where AI hits in manufacturing is in things like material requirements planning—which, as practiced by people, is part art form.
Your smart trucks, otoh, are AI by that professor's definition. They're making decisions, independent of the driver, and they're not always making the same (or right) decision.
Your AI voice lab is working for you precisely because it doesn't echo back precisely the same thing every time.
I love your descriptions and your digging down on the subject.
Orbital welders certainly perform as you say. They are (kind of) as dumb as it gets.
The trouble that one of the engineers, in particular, communicated to me, was that, in attempting to create a welder for some aspects in a naval shipyard,
The human welder was constantly having to adjust.
He was really, working the puddle (I'm forgetting so many terms),
but, it was too complex (at that time) for the programming to be able to emulate the human.
I'm getting a picture, from much of what you are saying, regarding a mix of, pure mechanical function,
vs,
a melding of the mechanical and,
some level of "decision making", happening in real time
(Such as with the truck thing).
In the case of the programs operating the truck, one aspect seems to be that those who might be doing the programming might simply think, driving is driving, or,
have too much of a focus / bias / blind spot and not be accounting for the complexity and myriad variables involved.
When the "program", itself, begins to recognize, catalog, and adjust for these variables, independently, and, eliminate "human error", in the programming,
Then, True self driving vehicles might be very well in reach.
Am I, finally, hitting on one aspect of A.I.?
Say, an A.I., working Through a mechanical welder, to complete tasks of unknown complexity without need of supervision Or, constant reprogramming / tweaking?
Tim wrote: "I'm sitting here, editing a chapter, it's got a good amount of dialogue, and, I haven't been happy with it,And, as I'm working it, I need to hear it,
So, I find myself simply opening my link to th..."
That is interesting, and definitely a good use of AI, so I agree not all AI is bad but unfortunately there are some that will exploit it. It sounds like it could definitely be helpful for people with certain processing disorders, or people with anendophasia, and things like dyslexia or dysgraphia.
Tim wrote: "Say, an A.I., working Through a mechanical welder, to complete tasks of unknown complexity without need of supervision Or, constant reprogramming / tweaking?"
I think that "unknown complexity" bit is pretty key. The space miner shoving his arm in the autodoc has no idea of the complexity of his wound--he likely hasn't even got his spacesuit off to be able to identify it!
AIs will need tweaking, but the same way we do—sometimes externally (e.g., from people) and sometimes by itself ("feedback"). I'm pretty sure what scares people the most is the self-modification!
I know other writers have said they've used AI to help the writing process, and that's not necessarily bad. When it gets bad is when people let AI write their press releases without checking them—but people are capable of that level of stupidity without AI (https://nypost.com/2023/07/18/obscene...)
LOL! You are so right, Derek,
We are so capable in that regard.
And, I love the points you raise, Rebecca.
We are walking a tightrope, eh?
And, it's a long way down.
Rebecca, here is one for you.
I just published an Amazon Vella story, posted my first episode.
It only needs a cover image, that then gets cropped.
All the cover images are round, and, Vella applies the title, not the author.
I'm tight with cash right now. Too many various projects. And, I used the Bing A.I. image tool to produce the image.
When I actually go to publish the Book, itself, on Kindle, I'll hire my Illustrator / formatter to do it.
He does a really brilliant job with all of that, and, his suggestions are far better than what I imagine.
But, I wanted to publish it today, it's been on the burner, waiting for about 5 weeks, and, I didn't want to wait any longer, and, with this tool...
It only took about 30 minutes, maybe a bit more, and, I'm happy enough with it for this particular venue.
:/
What do you think?
I'm running into this stuff real time.
Tim wrote: "Rebecca, here is one for you.
I just published an Amazon Vella story, posted my first episode.
It only needs a cover image, that then gets cropped.
All the cover images are round, and, Vella appli..."
That seems like a practical use for AI art. The issues with AI art are a whole separate problem; I don't have an issue with it as long as the images aren't stolen or using other people's intellectual property. I think a lot of indie authors use placeholders like that. Congrats on the publishing!
The AIs are trained on images hoovered up from the internet - with no regard for copyright. Every image, story and song coming from AI - any of them - is a derivative work of copyrighted material.
Derek wrote: "Tim wrote: "I saw it in welding." I listened to an interesting discussion yesterday, where a professor described AI as, necessarily, "having results we cannot predict". If it's completely predictable (as most welding jobs should be!), it's not AI."
Yes, and no. Unpredictable, unexplainable results are one part of AI, but not all of it. This is a field of AI research and development that is sometimes called "Black Box AI", where the programming and algorithms are so complex as to be nearly incomprehensible to most of us average humans. It's one particular field, but the term "AI" is also being used to cover a lot of other things, as well.
I find it helpful to think of "AI" as a new term for "Big Data". Eight or ten years ago, we were talking about Big Data in terms of how your web surfing and social media habits were being tracked and sold in order to aim highly targeted advertising at you. Think: you shop for a new table lamp on Amazon, and suddenly, every other ad you see in Facebook is for Lamps Plus. That was Big Data.
A big chunk of AI is doing the same thing now: processing enormous amounts of data, in order to give responses or analyses. This can be applied to marketing, engineering, scientific research, etc. But most of it is still just programming, which is just defining the algorithms, to interpret data.
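To make that "just programming to interpret data" point concrete, here's a toy sketch (my own made-up categories and keywords, not how any real ad system works): the "you shopped for a lamp, now you see lamp ads" behavior can be as simple as counting keyword matches against a browsing history.

```python
from collections import Counter

def recommend_ads(browsing_history, catalog, top_n=3):
    """Toy 'targeted advertising' scorer: count how often each ad
    category's keywords show up in the user's browsing history,
    then return the most frequently matched categories."""
    counts = Counter()
    for page in browsing_history:
        for category, keywords in catalog.items():
            if any(kw in page.lower() for kw in keywords):
                counts[category] += 1
    return [category for category, _ in counts.most_common(top_n)]

# Hypothetical example data
history = ["searched: table lamp", "viewed: desk lamp reviews", "read: news"]
catalog = {
    "lighting": ["lamp", "bulb"],
    "furniture": ["desk", "table"],
    "sports": ["bike"],
}
print(recommend_ads(history, catalog))
```

No learning, no intelligence — just tallying data and ranking it, which is the kind of "algorithm interpreting data" I mean.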
Cheng wrote: "The AIs are trained on images hoovered up from the internet - with no regard for copyright. Every image, story and song coming from AI - any of them - is a derivative work of copyrighted material."
Also, yes and no. There IS some regard for copyright. Companies that fail to respect copyright, when analyzing source content, can and do get sued for violations. Ethical programmers pay attention to the source data that is used to train their programs, and use content that is in the public domain.
Or at least, they try to. Are there violations? Sure. Sometimes accidental. Sometimes deliberate. Always have been people who will cheat, steal or mis-use content. This is why we have copyright law, and new case law will develop as AI issues arise. But there are also ethical content generating programs as well.
In the other thread on AI, I mentioned that I've been using an AI video creation tool in my day job, to create training videos for our customers. The service we use, Synthesia, just announced yesterday that they had to remove about a dozen "voices" from their service, because of some issue with their contract with a third party to license the content. They didn't go into details, and focused instead on how they were correcting the issue (providing alternatives, etc). But this is just an example at the ground level, of how lawful, ethical organizations are in fact making an effort to respect intellectual property.
That's why this AI stuff is so complex. Yes, there's potential for misuse. But there is also potential for good.
Lisa wrote: "Cheng wrote: "The AIs are trained on images hoovered up from the internet - with no regard for copyright. Every image, story and song coming from AI - any of them - is a derivative work of copyrigh..."
Wonderful comment,
I was going to ask, Cheng,
What do we do about the copyright aspect,
Just as you posted, Lisa.
Your comment is so on point.
A.I. (or some variation thereof) is here, like it or not, and, it is Not going away.
There seems to be so much at stake.
That you are directly using these tools in your day-to-day working life, that it's in my semi,
And the list of where it's being applied would touch pretty much everyone's life in some capacity throughout much of the world, now,
says we are in the thick of it.
And, it seems to be happening at a faster and faster rate.
My company sees a tremendous need for it on the logistics end, and I agree.
On the shipping side, it can mean the difference between a company continuing to compete and a company closing its doors.
So,
Where do we go, with all of this?
How does this become a tool that allows humanity to flourish and prosper and still allows us to be,
free?
Tim wrote: "Where do we go, with all of this? How does this become a tool that allows humanity to flourish and prosper and still allows us to be, free?"
Same way humanity has always adopted new technology: one person at a time, with fumbling and mistakes and goof ups, and resistance, and other people out there on the bleeding edge plowing ahead anyway, and other people making new laws to plug the loopholes that criminals exploit. It will be messy and things will happen but there's no stopping it.
Lisa wrote: "Yes, and no. Unpredictable, unexplainable results are one part of AI, but not all of it."
"Unpredictable" is a necessary precondition to be AI. That doesn't mean that all unpredictable programs are AI.
Calling something AI does not make it AI. Large Language Models are unpredictable, and yet still are not really AI, but I can live with considering them almost there. "Expert systems" are not AI, no matter how much IT people like to call them that. AI is most definitely not Big Data. That's just data, and rarely even information, let alone intelligence.
"But most of it is still just programming, which is just defining the algorithms, to interpret data"
Yeah, we were doing that (albeit on a much smaller scale) in the 80s in banking, and had the gall to call that AI, too.
There really isn't much regard for copyright. Once in a while they get in trouble for disregarding copyright, but after the tools have manipulated the images and text they've been stealing, it's pretty hard to prove the theft.
"But this is just an example at the ground level, of how lawful, ethical organizations are in fact making an effort to respect intellectual property."
It's an example of how a company with insufficient ethics made no attempt to verify that it had any legal right to use those voices. Fixing things after you get caught isn't ethics, it's just the law.
I would bet that in the end, the copyright issues will be "solved" pretty much the way Spotify has "solved" it in the music business: by completely disregarding all existing agreements, and paying out a pittance to the people whose work has been appropriated. Already we have politicians saying that the theft is too important to let copyright law interfere with the work.
By the way, the professor was Yuval Noah Harari and the interview is at https://www.cbc.ca/player/play/video/...
AI is revolutionizing both private and business life. It will be a mixed bag, as it is in all of these types of things. Some will be hurt and plenty will be helped. It will be very strong in pattern recognition and research. It will revolutionize education and academic endeavors. It will also have its pitfalls and will be misused. Just so we are all on the same page, AI has been around for 40 years in various forms.
Apparently I didn't share this link before... very interesting but unsurprising.
https://www.media.mit.edu/publication...
The crux of it is that AI use can affect cognitive development. The irony is, there are ways it can help our development, but what's probably more likely is that it's going to negatively impact young people's development. That's definitely concerning, since the last thing we need as a species is to grow dumber.
You can expect to start hearing the term "cognitive debt" in the future as it becomes more of an issue. I just hope we find ways to manage its use so that it helps us more than it hurts us. It could go both ways, like with any tool.
It kind of reminds me of some of the books we have read.

