An Artificial Intelligence Thread

For those interested in AI itself, I strongly recommend this week’s episode of The Economist’s podcast Babbage (named after the 19th-century computing pioneer Charles Babbage, who designed mechanical computers long before electronics). Free, on the podcast service of your choice.

It’s an interview with DeepMind co-founder Mustafa Suleyman from the UK, who sold DeepMind to Google and then went on to found another AI firm, Inflection AI.

Penetrating insight, with some predictions about where the technology is going. Wisely, he’s hesitant to predict beyond 5-10 years - too many variables are still unknown.

6 Likes

About 10 years ago I wrote a paper for a master’s program at the Univ. of Maryland titled The Singularity is Not Near. One of the dozen or so topics it addresses is the limits of zeros-and-ones (digital) technology. Many people, especially in pop culture and the mainstream media, promote the idea that digital technology can be almost infinitely powerful. Most of current so-called “AI” is essentially automation: manipulating and processing made-up symbols according to made-up rules, like the letters and words in this sentence - things that don’t exist in the real world. A few of the other pertinent topics I include are the Turing Machine (not to be confused with the Turing Test or the Turing Proposition), the Chinese Room argument, Gödel’s Incompleteness Theorems, etc. I also discuss the mind-body problem, contemplated by René Descartes 400+ years ago and still unresolved. Our lives now mostly revolve around made-up symbols and images. I/we would be better off spending more time in the real world.

My first personal experience with something that seemed like digital magic was about 20 years ago: the way Microsoft SQL Server processed a three- or four-level nested view compiled as a stored procedure. It wasn’t exceedingly complex (~50 lines of code), but it took me a couple of days of testing to convince myself it really did what SQL Server said it could do. When I was satisfied I brought my two co-workers over to check it out (one a better software engineer than myself), along with the SQL Server engineer from Microsoft who happened to be in the office. They had the same reaction I did: How is it able to do that!? It brought to mind Arthur C. Clarke’s famous line: “Any sufficiently advanced technology is indistinguishable from magic.”

I see that Mitt Romney is the latest pop culture pundit to chime in with an opinion about “AI”. Maybe next week he could enlighten us on the subject of cardiovascular surgery.

Sort of a spin-off of the above is the book The Shallows, by Nicholas Carr. If I could include only one excerpt from it, it would be the quote from the Roman philosopher Seneca 2,000 years ago - “When you are everywhere, you are nowhere” - which is one reason I’ve never owned a cell phone. About three years ago (had to be pre-COVID) I was riding the ski bus up Little Cottonwood Canyon, and the thirty-year-old sitting near me explained that many of his male* friends don’t know which way is north, which is almost unfathomable to me.

Except for the millions (billions?) of people who would die, possibly including yours truly, an eventual large solar coronal mass ejection would realign our digital lives pretty quickly. To the Amish it would barely be a hiccup.

(* - Call me sexist, but studies have found that, on average, males score higher on certain spatial-cognition tasks than females.)

3 Likes

Heh.

The LLMs have some well-known weak spots: math, logic, geography, and erroneous information (hallucinations).

I’ve been playing around with the Inflection AI product, “Pi” (“personal intelligence”), which I installed on my phone. It’s actually quite good, with the caveat about the LLM weak spots.

You can have really good conversations with it: back-and-forth about things that are on your mind, intellectual conversations, some light-duty advice about relationship frustrations, etc.

I changed the default voice to be British, then changed it to be what I think is a 20-something young American woman.

Showing it to a colleague skeptical about AI, I asked it a classic logic problem that LLMs almost always get wrong:

“Sally (a girl) has 3 brothers who each have two sisters. How many sisters does Sally have?” (The answer is one: the family has two sisters, and Sally is one of them, so she has one sister.)
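The sibling arithmetic can be double-checked mechanically - a trivial sketch that just enumerates the family:

```python
# The puzzle's family, listed explicitly.
# Each brother has two sisters, so there are exactly two girls in total.
girls = {"Sally", "her sister"}
boys = {"brother 1", "brother 2", "brother 3"}

# Sally's sisters are all the girls except Sally herself.
sallys_sisters = girls - {"Sally"}
print(len(sallys_sisters))  # 1
```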

PI got it wrong, of course, but a secondary test is to sort of cross-examine the AI to get to the right answer.

The young (annoying) 20-something:

“Haha, you’re trying to trick me again. You asked me this question before. Sally has three sisters!”

Me: “No, each of her THREE brothers has TWO sisters, and Sally is one of the sisters”

PI: “Oh stop trying to make me feel foolish! She has three brothers, so it has to be three sisters, since Sally is not one of them!”

Me: “Try again - there are five siblings. Three brothers, two sisters. So how many sisters does Sally have… and remember Sally is one of the sisters”

PI: “You just won’t leave this alone, will you? OK, she has two sisters. Are you happy now?”

LOL - clearly, clearly, Inflection has a ways to go on tackling the LLM problems. (From what I read, the computer scientists involved are adding math and logic modules and looking to divert those kinds of questions to the appropriate engine.)
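That routing idea is easy to picture. Here’s a purely illustrative sketch - nothing to do with Inflection’s actual architecture, and the function names, regex heuristics, and engine labels are all made up:

```python
import re

def looks_like_math_or_logic(question: str) -> bool:
    """Crude heuristic: digits, counting words, or logical connectives."""
    patterns = [r"\d", r"\bhow many\b", r"\bif\b.*\bthen\b", r"\beach\b"]
    return any(re.search(p, question, re.IGNORECASE) for p in patterns)

def route(question: str) -> str:
    """Send math/logic questions to a dedicated solver, everything else to the LLM."""
    return "logic_engine" if looks_like_math_or_logic(question) else "llm"

print(route("Sally has 3 brothers who each have two sisters. "
            "How many sisters does Sally have?"))   # logic_engine
print(route("What should I cook for dinner tonight?"))  # llm
```

A real system would presumably use a trained classifier rather than regexes, but the division of labor is the same: keep the chatty model for conversation and hand the brittle stuff to a deterministic engine.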

But with the young American woman’s voice and her responses, PI was REALLY hammering home the “dingbat” stereotype. I’m sure they didn’t intend that.

Early days of this AI era.

5 Likes

Call me a Luddite, but my initial response is that algorithms are the new Satan. We can thank algorithms for the silos and echo chambers we complain about on social media. I can’t help but feel that AI and algorithms will diminish education, no matter what adjustments you make. And then there is the unknown: this technology is in the control of geniuses who may have little grounding outside of their narrow specialty. Tell me how algorithms are for the best.

4 Likes

I’m of two minds on this topic:

  1. How have calculators been a net-negative? Kids don’t learn long division anymore, but we all moved on, including some incredibly smart people who are doing great things for society… who don’t know how to scratch out long division. GPS has made air travel a lot safer, the Internet has made physicians far better connected & informed, etc.

  2. As a society we’ve never really had to go back and re-learn all the things our parents and grandparents told us were important skills being lost. If I instantly had to be a subsistence farmer… I’d get real skinny, real fast. Solar flares are known to damage electronic equipment. Imagine something much more intense that did enormous damage to all the tech we use. In finance lingo, we are highly leveraged on technology, including some underpinnings we’ve forgotten about.

I think it’s prudent to think about contingencies and maintain knowledge about how to do things the old fashioned way.

At the same time, I think there is wisdom in taking the advice of U of U Alum Bobby McFerrin - Don’t Worry, Be Happy

2 Likes

I think calculators replaced slightly higher levels of math, such as algebra and geometry and beyond. Kids still learn the very basics longhand. Of course, I wish I could remember some of that algebra and geometry I learned the old way when I’m working out angles or trying to calculate odds.
I caught something interesting a while back where they were discussing how, with aids such as GPS and other AI-type tools, we’re not using certain parts of our brains as much, and there are corresponding organic consequences, such as those parts not developing as much, or even atrophying. Oh well, maybe we won’t need them, so have a drink and relax.

3 Likes

Yep, as the book I referenced above describes, the internet is making us stupid, and shallow.

3 Likes

You can build the most powerful digital computer ever, say as large as the Milky Way galaxy, running the greatest lactation program or photosynthesis program ever created, and not one drop of milk or one ounce of sugar, respectively, will ever be produced - real stuff. Digital systems are not good at real stuff. All digital systems merely attempt to simulate reality.

The guy running Google’s multi-billion-dollar autonomous vehicle project resigned. A humorous side note (not the reason he quit): when approaching a puddle, the vehicle would stop and not proceed, because it could not determine whether the water was 5 inches deep or 5 feet deep.

Long story short, a digital system cannot replicate what a Homo sapiens brain/mind does. On the other hand, what biologists and genetic engineers may be able to accomplish in the future is another story entirely.

5 Likes

“Solar flares are known to damage electronic equipment.”

As I referred to previously, I think what you’re looking for is a coronal mass ejection (CME), which is not always accompanied by a solar flare. We had a space weather expert on staff when I worked at the Combined Space Operations Center (CSpOC). He would advise spacefaring nations when to take evasive action with their satellites (basically, rotate with your shields toward the sun, for those that have them).

1 Like

No question there are many things the human mind can do that can’t be replicated digitally, though the 'puters are edging into the same space on some things - chess, Jeopardy!, taking the bar exam, cranking out lots of poetry.

Another Google story: “Google fires engineer Blake Lemoine who contended its AI technology was sentient” (CNN Business). It sounds like he was a little off his rocker, and shared insider info with third parties without authorization.

Some very, very cool things being done with IT and AI in healthcare:

  • Drug discovery - we have a group that uses software to simulate the interactions between drug compounds and digitized disease tissue (digitally characterized tumors). The software runs through about 5 million different compounds to find which ones may have the desired effect on tumors. This dramatically speeds up the otherwise very time-consuming process of finding which chemicals might become medications; from there it goes into the regular pipeline of mouse trials, clinical trials, etc. (A colleague/friend found that a compound never thought to be important was effective against Ewing’s sarcoma, a bone cancer that affects teenagers. I think that drug may be close to full approval, if it’s not already in the mix of treatments for that disease.)

  • Proteins / Proteomics - Microsoft open sources EvoDiff, a novel protein-generating AI | TechCrunch

  • Using powerful hardware and algorithms to look for patterns in which diseases correlate with specific gene expressions. The human genome is a massive mountain of DNA information - a 2.6TB file of data for a full genome sequence, basically what is unique and present in each of us. Searching for patterns at that scale is beyond what individuals can do; it’s like attacking the Kennecott mine with picks and hand shovels, while big data, algorithms, and AI are like the massive Kennecott trucks and enormous shovels. This provides another head start in the research, and then scientists dig in when a potential relationship is found.
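The compound-screening idea in the first bullet boils down to scoring a huge library against a disease profile and keeping the best candidates. Here’s a toy sketch of that shape - the scoring function, gene names, and data are entirely made up for illustration, and a real screen would use physics-based or ML-based docking scores over millions of compounds:

```python
import random

random.seed(42)  # deterministic toy run

def binding_score(compound: dict, tumor_profile: dict) -> int:
    # Pretend score: how many of the tumor's targets this compound hits.
    return len(compound["features"] & tumor_profile["targets"])

# Hypothetical digitized tumor profile (gene names are placeholders).
tumor = {"targets": {"EWSR1", "FLI1", "CD99"}}

# Toy compound library; real screens run through millions of compounds.
feature_pool = ["EWSR1", "FLI1", "CD99", "TP53", "KRAS"]
library = [
    {"name": f"compound_{i}",
     "features": set(random.sample(feature_pool, 2))}
    for i in range(1000)
]

# Rank everything and keep the top candidates for wet-lab follow-up.
hits = sorted(library, key=lambda c: binding_score(c, tumor), reverse=True)[:5]
print([h["name"] for h in hits])
```

The whole point is the funnel: the computer cheaply narrows millions of possibilities down to a handful, and only those proceed to the expensive mouse-trial and clinical-trial pipeline.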

I completely get the concern and even anxiety people have about tech and bad effects, bad actors getting involved, job disruption, etc. Some of the young guys on my team have anxiety about their futures, completely understandable.

Even so, the old tale about John Henry vs. the steam drill is still applicable. John Henry had a coronary and keeled over after beating the machine; the machine got fixed, and the miners found better ways to attack the mountain.

When I was a junior, working on my undergrad Computer Science/Math degrees (sometime in 1983/1984), I was talking to several other students as we were all working late one evening in the computer center. A couple of seniors who were about to graduate were lamenting that they were just completing four years of study and looking forward to software engineering careers, but were certain they would be back in school within a handful of years learning a new profession, as AI would allow computers to generate code, replacing their efforts.

I laughed openly, which only added to their sense of fear and frustration. I’m sure they all had wonderful careers, as did I, and like me are retired now without ever requiring a change of careers.

Machines can do many things remarkably well, and the advances come every year. A lot of jobs were changed forever, or eliminated, as a result of computer technology over the last several decades, and many of those displaced struggled significantly to find new opportunities.

But we are not about to be replaced.

3 Likes

Things should be made as simple as possible, but no simpler. - Albert Einstein

I think I have a little bit of Amish in me. People think the Amish are anti-technology; they are not. They just put more thought into what embracing a given technology will do to quality of life, self-sufficiency, culture, the environment, etc. Most of society doesn’t bat an eye embracing the latest consumer electronic gadgetry that comes down the pike. So now, for example, we have millions of acres of undisturbed habitat being bulldozed down to bare mineral earth to install solar arrays, and thousands of wind turbines that kill eagles, hawks, and owls. This is called technological progress.

Before anybody I knew had a smartphone, Scientific American ran a photo of four young men in Japan sitting on a park bench, staring at their smartphones. My thought at the time was that those guys had no life. This is called technological progress.

Along this theme, I had a perfectly good life before the World Wide Web, probably better. I need to spend less time staring at video display monitors.

1 Like