AI Where Do We Stand?

MPJC
Posts
2019
Joined
5/18/2017
Location
CA
Fantasy
9/2/2025 7:52pm
TAUTOG wrote:
You are not wrong about AI being a tool, but is the tool worth using? The number of wrong answers AI comes up with because it has...

You are not wrong about AI being a tool, but is the tool worth using? 

The number of wrong answers AI comes up with, because it has to come up with an answer, is a failing grade. Will it get smarter and better? Yeah, maybe. But it is a computing system. The more wrong answers it comes up with without being corrected, the worse the system becomes. 

MPJC wrote:
I don’t know that there’s an all encompassing right answer. It depends what you’re wanting it to do for you. Some tasks are suitable for AI...

I don’t know that there’s an all encompassing right answer. It depends what you’re wanting it to do for you. Some tasks are suitable for AI, many aren’t. Sometimes distinguishing between these is tricky because you don’t know what you don’t know. 

AI can supply information, but that doesn’t necessarily yield knowledge. Information generally requires context and background knowledge to interpret. The intellectually lazy are not interested in doing the work necessary to understand what they are presented with, and AI won’t change that one way or the other - though it may make it easier to get away with laziness. 

Speaking of laziness, in a forum like this one, when someone just copies an AI summary into a reply, I’m not generally interested in reading it. On the other hand, an AI post can make a decent conversation starter if done right - the recent one about the physical demands of motocross was pretty effective for that. There were obvious mistakes in the information but it served its purpose as a conversation starter. 

TAUTOG wrote:
The post you mentioned about motocross and physical demands is true and interesting but with major flaws. You are more eloquent at saying what I'm thinking. Like any...

The post you mentioned about motocross and physical demands is true and interesting but with major flaws. 

You are more eloquent at saying what I'm thinking. 

Like any kind of technology, it is what you make of it. Or how you use it. 

 

The flaws were easily spotted by the knowledgeable readers of this forum. If you didn’t already have some knowledge, then the mistakes wouldn’t be apparent at all. It takes knowledge to evaluate information, but if you’re looking to that information for knowledge, then how can you evaluate it? Therein lies the conundrum. 

2
Falcon
Posts
12191
Joined
11/16/2011
Location
Menifee, CA US
9/3/2025 9:48am

There is a real possibility that AI will someday soon pass the Turing test (let me know if one already has). If you're not familiar with the test, it is when a computer simulation can fool its questioner into believing it is human. At a similar point in time, AI may claim it is self-aware (one has already done so). The question is this: Is the computer self-aware the way we are? And if so, how could we really know this to be true? What if AI begins demanding human rights? 

1
9/3/2025 10:10am
early wrote:
A surveillance state like never seen before. https://www.forbes.com/sites/thomasbrewster/2025/09/03/ai-startup-flock-thinks-it-can-eliminate-all-crime-in-america/

Sounds horrible 

2


XXVoid MainXX
Posts
8105
Joined
5/25/2012
Location
Schenectady, NY US
9/3/2025 11:19am
Falcon wrote:
There is a real possibility that AI will someday soon pass the Turing test (let me know if one already has). If you're not familiar with...

There is a real possibility that AI will someday soon pass the Turing test (let me know if one already has). If you're not familiar with the test, it is when a computer simulation can fool its questioner into believing it is human. At a similar point in time, AI may claim it is self-aware (one has already done so). The question is this: Is the computer self-aware the way we are? And if so, how could we really know this to be true? What if AI begins demanding human rights? 

How can we know for sure that you are self-aware? How can we even be sure you are not AI?

1
SEEMEFIRST
Posts
13481
Joined
8/21/2006
Location
Arlington, TX US
9/3/2025 11:53am
Falcon wrote:
There is a real possibility that AI will someday soon pass the Turing test (let me know if one already has). If you're not familiar with...

There is a real possibility that AI will someday soon pass the Turing test (let me know if one already has). If you're not familiar with the test, it is when a computer simulation can fool its questioner into believing it is human. At a similar point in time, AI may claim it is self-aware (one has already done so). The question is this: Is the computer self-aware the way we are? And if so, how could we really know this to be true? What if AI begins demanding human rights? 

How can we know for sure that you are self-aware? How can we even be sure you are not AI?

You could turn his A/C off for a while. If he quits responding, he's probably AI.

2
Zycki11
Posts
7683
Joined
4/1/2008
Location
Edwardsville, IL US
9/3/2025 12:14pm
Falcon wrote:
There is a real possibility that AI will someday soon pass the Turing test (let me know if one already has). If you're not familiar with...

There is a real possibility that AI will someday soon pass the Turing test (let me know if one already has). If you're not familiar with the test, it is when a computer simulation can fool its questioner into believing it is human. At a similar point in time, AI may claim it is self-aware (one has already done so). The question is this: Is the computer self-aware the way we are? And if so, how could we really know this to be true? What if AI begins demanding human rights? 

The only reason AI is coming to fruition is because large corps see profit in it: eliminate middle management jobs, blue collar jobs, etc. The truth is, we either get busy utilizing it to our own personal advantage (which generally is successful in helping others) or we fall behind the times. I am not a fan of AI, and yet I utilize it with my own perspective and take it with a pinch of salt. I have developed quite a few badass things with it that I am about to place on the market as well. 

In the end, all of it still won't matter. The focus was, and always will be, on yourself and how to improve mentally and physically, each and every day. The wins start stacking up. 

MPJC
Posts
2019
Joined
5/18/2017
Location
CA
Fantasy
9/3/2025 1:06pm Edited Date/Time 9/3/2025 1:09pm
Falcon wrote:
There is a real possibility that AI will someday soon pass the Turing test (let me know if one already has). If you're not familiar with...

There is a real possibility that AI will someday soon pass the Turing test (let me know if one already has). If you're not familiar with the test, it is when a computer simulation can fool its questioner into believing it is human. At a similar point in time, AI may claim it is self-aware (one has already done so). The question is this: Is the computer self-aware the way we are? And if so, how could we really know this to be true? What if AI begins demanding human rights? 

John Searle argued quite persuasively that it's possible to mimic human interaction so perfectly that the person you're interacting with could be completely convinced they are having a conversation with someone who understands them (thus passing the Turing test), while the machine completely lacks understanding or any kind of cognition. As Searle puts it:

"Because the formal symbol manipulations by themselves don’t have any intentionality; they are quite meaningless; they aren’t even symbol manipulations, since the symbols don’t symbolize anything. In the linguistic jargon, they have only a syntax but no semantics. Such intentionality as computers appear to have is solely in the minds of those who program them and those who use them, those who send in the input and those who interpret the output."

https://rintintin.colorado.edu/~vancecd/phil201/Searle.pdf

So it's quite conceivable to simulate intelligence where there is none. This doesn't mean that it is impossible in principle for there to be a sentient machine. But there is no reason to think that the kinds of computers we have currently are doing anything more than executing a program without understanding anything - albeit more quickly, and in a much more sophisticated manner than when Searle wrote this paper. But nothing, in principle, has changed. Computing power and efficiency have increased exponentially but computers are still the same basic kind of machine.

What would have to happen for me to think a machine claiming sentience would actually be sentient? That's a difficult question. We don't even understand how the goings on in our own brains cause the conscious states that we experience. We're aware of many correlations, but that's not the same as understanding why there should be anything that it is like for us to be us at all. So can there be something that it is like for a computer to be a computer?

Here's another way to approach it - work backwards, so to speak. Suppose you had a brain malfunction, and we had the technology to replace a small part of your brain with silicon chips that perform the function currently performed by your synapses and neurons. The operation is a success, and as far as anyone can tell, you're exactly as before. But your brain disease keeps worsening, so bit by bit, organic material in your brain is replaced by silicon chips and connections. Each time a surgery is completed, you seem unchanged. Suppose this continues until all the organic matter is completely replaced. Is there a point at which we should treat you as non-sentient, even though you seem indistinguishable from the person you were before any of the procedures? You're claiming sentience. Should we believe you? I think the ethically proper thing to do is err on the side of believing you and treating you as sentient. So at some point, arguments about how computers can't be sentient notwithstanding, if something claims sentience and asks for rights, it may be morally proper to grant them. 

MPJC
Posts
2019
Joined
5/18/2017
Location
CA
Fantasy
9/3/2025 1:24pm

The Star Trek TNG episode where Data is on trial because he objects to being disassembled and examined deals with Falcon’s question - and it’s a tremendously interesting question - very well (one of the very best episodes in the franchise in my opinion). It’s called Measure of a Man. Worth the watch if you’re into that sort of thing. 

2
early
Posts
9782
Joined
2/13/2013
Location
University Heights, OH US
9/3/2025 1:27pm
early wrote:
A surveillance state like never seen before. https://www.forbes.com/sites/thomasbrewster/2025/09/03/ai-startup-flock-thinks-it-can-eliminate-all-crime-in-america/

Sounds horrible 

Think I'll pass on this, while I can...

2
TAUTOG
Posts
1561
Joined
1/27/2023
Location
Mohrsville, PA US
9/3/2025 1:33pm
MPJC wrote:
The Star Trek TNG episode where Data is on trial because he objects to being disassembled and examined deals with Falcon’s question - and it’s a...

The Star Trek TNG episode where Data is on trial because he objects to being disassembled and examined deals with Falcon’s question - and it’s a tremendously interesting question - very well (one of the very best episodes in the franchise in my opinion). It’s called Measure of a Man. Worth the watch if you’re into that sort of thing. 

I second this - great episode.

1
9/3/2025 1:34pm

That guy's ex-Israeli military...

I bet all those terrorists in Lebanon that had their phones blow up endorse this . 😲

MPJC
Posts
2019
Joined
5/18/2017
Location
CA
Fantasy
9/3/2025 1:36pm
early wrote:
A surveillance state like never seen before. https://www.forbes.com/sites/thomasbrewster/2025/09/03/ai-startup-flock-thinks-it-can-eliminate-all-crime-in-america/

Kind of like China’s social credit system on steroids. Some people will probably welcome it. Those people are stupid. 

2
XXVoid MainXX
Posts
8105
Joined
5/25/2012
Location
Schenectady, NY US
9/8/2025 1:41pm

The AI right now is friggin incredible. I absolutely love it and I only started using it in the last few weeks. I can't imagine what we're in for in the near future. It will be amazing. 

3
9/8/2025 5:45pm
The AI right now is friggin incredible. I absolutely love it and I only started using it in the last few weeks. I can't imagine what...

The AI right now is friggin incredible. I absolutely love it and I only started using it in the last few weeks. I can't imagine what we're in for in the near future. It will be amazing. 

What are you using AI for?

XXVoid MainXX
Posts
8105
Joined
5/25/2012
Location
Schenectady, NY US
9/8/2025 9:06pm Edited Date/Time 9/8/2025 9:22pm
The AI right now is friggin incredible. I absolutely love it and I only started using it in the last few weeks. I can't imagine what...

The AI right now is friggin incredible. I absolutely love it and I only started using it in the last few weeks. I can't imagine what we're in for in the near future. It will be amazing. 

What are you using AI for?

I guess I started using AI when I bought my Tesla with FSD back in November. It's driven us 30,000 miles all over North America in less than 10 months. Traveling is SOOOO much better. Then we got an over-the-air update that included Grok, and I talk to Ara (Grok) while driving, and she tells me all the interesting things about the places we go, along with anything else I can think to ask her. I also ask her things like what restaurants are around the next charger we're stopping at, how many steps they are from the charger, how many stars each one has, and have her read out some of the reviews. It's like talking to the smartest person on Earth. Today I had Grok write me a Python program to convert some PDF invoices from all of our charge stops into a CSV that I could import into LibreOffice Calc. 
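For what it's worth, the PDF-to-CSV part is only a few lines of Python. The sketch below fakes the extraction step with sample invoice strings (the invoice layout and field names are invented for illustration, not what Tesla's invoices actually look like); pulling text out of real PDFs would need a third-party library such as pypdf.

```python
import csv
import io
import re

# Hypothetical invoice text as it might come out of a PDF text extractor
# (e.g. pypdf's page.extract_text()); layout and fields are assumptions.
SAMPLE_INVOICES = [
    "Supercharger Invoice\nDate: 2025-08-14\nLocation: Flagstaff, AZ\n"
    "Energy: 52.4 kWh\nTotal: $21.48",
    "Supercharger Invoice\nDate: 2025-08-15\nLocation: Albuquerque, NM\n"
    "Energy: 61.0 kWh\nTotal: $26.10",
]

FIELDS = ["Date", "Location", "Energy", "Total"]

def parse_invoice(text):
    """Pull each 'Key: value' line out of one invoice's extracted text."""
    row = {}
    for key in FIELDS:
        m = re.search(rf"^{key}:\s*(.+)$", text, re.MULTILINE)
        row[key] = m.group(1).strip() if m else ""
    return row

def invoices_to_csv(texts):
    """Return a CSV string that LibreOffice Calc can import directly."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=FIELDS)
    writer.writeheader()
    for text in texts:
        writer.writerow(parse_invoice(text))
    return buf.getvalue()

print(invoices_to_csv(SAMPLE_INVOICES))
```

The csv module handles the fiddly part automatically: a location like "Flagstaff, AZ" contains a comma, so it gets quoted in the output rather than breaking the column layout on import.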

1
SEEMEFIRST
Posts
13481
Joined
8/21/2006
Location
Arlington, TX US
9/9/2025 6:01pm Edited Date/Time 9/9/2025 6:02pm

Hey Void Main, out of curiosity, and since searches are all over the place, what would you say the average "fill up" on the road for the Tesla is?

Oh, and time to top up.

Thanks,  SMF

9/9/2025 8:01pm
The AI right now is friggin incredible. I absolutely love it and I only started using it in the last few weeks. I can't imagine what...

The AI right now is friggin incredible. I absolutely love it and I only started using it in the last few weeks. I can't imagine what we're in for in the near future. It will be amazing. 

What are you using AI for?

I guess I started using AI when I bought my Tesla with FSD back in November. It's driven us 30,000 miles all over North America in...

I guess I started using AI when I bought my Tesla with FSD back in November. It's driven us 30,000 miles all over North America in less than 10 months. Traveling is SOOOO much better. Then we got an over-the-air update that included Grok, and I talk to Ara (Grok) while driving, and she tells me all the interesting things about the places we go, along with anything else I can think to ask her. I also ask her things like what restaurants are around the next charger we're stopping at, how many steps they are from the charger, how many stars each one has, and have her read out some of the reviews. It's like talking to the smartest person on Earth. Today I had Grok write me a Python program to convert some PDF invoices from all of our charge stops into a CSV that I could import into LibreOffice Calc. 

So cool. You are basically living my retirement dream.

 

9/19/2025 7:03am

“100% making people stupider.”

Looks like it already started.

2
MPJC
Posts
2019
Joined
5/18/2017
Location
CA
Fantasy
9/28/2025 8:43pm Edited Date/Time 9/28/2025 8:45pm

This is really interesting. This is a very passable philosophy paper written by an LLM. It’s probably to the point where if I were teaching I’d have to assign in-class essays, written with pen and paper - give the topics in advance so students can research and formulate what they will write in class. Even with that, a lot of the research will probably be via AI rather than reading primary sources. Not much we can do about that. 

Another option would be to quiz the students in person to see if they can answer questions about their paper. That’s not feasible for large classes though. 

The ability to research a topic and put together an argument to defend a thesis in a paper is a valuable skill. Students who really want to develop that skill can still do so if professors continue to assign take-home essays. But if those who are lazy and indifferent can have an LLM do it for them and end up with better grades than the honest students, we’ll have really lost something, in my opinion. If the coming generation relies on AI and never develops research skills of their own, we will definitely have become dumber. 


https://leiterreports.com/2025/09/21/how-much-trouble-are-we-in-with-the-chatgpt5/

1
9/29/2025 5:05am
Falcon wrote:
There is a real possibility that AI will someday soon pass the Turing test (let me know if one already has). If you're not familiar with...

There is a real possibility that AI will someday soon pass the Turing test (let me know if one already has). If you're not familiar with the test, it is when a computer simulation can fool its questioner into believing it is human. At a similar point in time, AI may claim it is self-aware (one has already done so). The question is this: Is the computer self-aware the way we are? And if so, how could we really know this to be true? What if AI begins demanding human rights? 

Don't they already give AI certain "rights"? They'll turn off Siri if you yell at her and abuse her with vile comments.

9/29/2025 5:16am
MPJC wrote:
This is really interesting. This is a very passable philosophy paper written by an LLM. It’s probably to the point where if I were teaching I’d...

This is really interesting. This is a very passable philosophy paper written by an LLM. It’s probably to the point where if I were teaching I’d have to assign in-class essays, written with pen and paper - give the topics in advance so students can research and formulate what they will write in class. Even with that, a lot of the research will probably be via AI rather than reading primary sources. Not much we can do about that. 

Another option would be to quiz the students in person to see if they can answer questions about their paper. That’s not feasible for large classes though. 

The ability to research a topic and put together an argument to defend a thesis in a paper is a valuable skill. Students who really want to develop that skill can still do so if professors continue to assign take-home essays. But if those who are lazy and indifferent can have an LLM do it for them and end up with better grades than the honest students, we’ll have really lost something, in my opinion. If the coming generation relies on AI and never develops research skills of their own, we will definitely have become dumber. 


https://leiterreports.com/2025/09/21/how-much-trouble-are-we-in-with-the-chatgpt5/

This transition has been in the works for a while. It used to be that computers were there to support us; now it's really apparent that we are being taught to support the computer. I only started getting AI answers about 2 months ago after an update on my phone. I will admit the answers tend to be more relevant than the old "uncle Google" search, which was way better than trying to look it up in books. I wonder if the AI platforms are competitive with each other - not the support/development teams, but the actual platforms.

MPJC
Posts
2019
Joined
5/18/2017
Location
CA
Fantasy
9/29/2025 5:47am
MPJC wrote:
This is really interesting. This is a very passable philosophy paper written by an LLM. It’s probably to the point where if I were teaching I’d...

This is really interesting. This is a very passable philosophy paper written by an LLM. It’s probably to the point where if I were teaching I’d have to assign in-class essays, written with pen and paper - give the topics in advance so students can research and formulate what they will write in class. Even with that, a lot of the research will probably be via AI rather than reading primary sources. Not much we can do about that. 

Another option would be to quiz the students in person to see if they can answer questions about their paper. That’s not feasible for large classes though. 

The ability to research a topic and put together an argument to defend a thesis in a paper is a valuable skill. Students who really want to develop that skill can still do so if professors continue to assign take-home essays. But if those who are lazy and indifferent can have an LLM do it for them and end up with better grades than the honest students, we’ll have really lost something, in my opinion. If the coming generation relies on AI and never develops research skills of their own, we will definitely have become dumber. 


https://leiterreports.com/2025/09/21/how-much-trouble-are-we-in-with-the-chatgpt5/

ToolMaker wrote:
This transition has been in the works for a while. It used to be that computers were there to support us; now it's really apparent that we...

This transition has been in the works for a while. It used to be that computers were there to support us; now it's really apparent that we are being taught to support the computer. I only started getting AI answers about 2 months ago after an update on my phone. I will admit the answers tend to be more relevant than the old "uncle Google" search, which was way better than trying to look it up in books. I wonder if the AI platforms are competitive with each other - not the support/development teams, but the actual platforms.

Indeed, it is a process of transition, and that process has become increasingly rapid. Not long ago, attempts to produce a philosophy paper with AI yielded almost comically bad results. 

So you’re wondering if it’s not programmers competing with each other to improve AI but the actual platforms themselves competing with each other? That’s a very interesting question. 

mvd61
Posts
1191
Joined
10/15/2021
Location
Brandon, SD US
9/29/2025 6:03am

Waste of electricity and many other resources. Totally unnecessary. Will do far more harm to society than good. If the government is loving it, then that’s your first indicator that it’s bad. 

7
MPJC
Posts
2019
Joined
5/18/2017
Location
CA
Fantasy
9/29/2025 6:09am
Falcon wrote:
There is a real possibility that AI will someday soon pass the Turing test (let me know if one already has). If you're not familiar with...

There is a real possibility that AI will someday soon pass the Turing test (let me know if one already has). If you're not familiar with the test, it is when a computer simulation can fool its questioner into believing it is human. At a similar point in time, AI may claim it is self-aware (one has already done so). The question is this: Is the computer self-aware the way we are? And if so, how could we really know this to be true? What if AI begins demanding human rights? 

ToolMaker wrote:

Don't they already give AI certain "rights" they'll turn off Siri if you yell and abuse her with vile comments

But nobody is claiming that Siri is sentient, right? I wonder what the motive is for turning Siri off in these circumstances. Perhaps it’s a matter of thinking that if you abuse Siri, your attitude towards Siri might carry over to people, so they want to nip that in the bud. I’m just spitballing, but either they think Siri can be offended or there’s some other reason for shutting it down. I find myself resisting the urge to say “she/her” instead of “it”. 

9/29/2025 6:22am
MPJC wrote:
This is really interesting. This is a very passable philosophy paper written by an LLM. It’s probably to the point where if I were teaching I’d...

This is really interesting. This is a very passable philosophy paper written by an LLM. It’s probably to the point where if I were teaching I’d have to assign in-class essays, written with pen and paper - give the topics in advance so students can research and formulate what they will write in class. Even with that, a lot of the research will probably be via AI rather than reading primary sources. Not much we can do about that. 

Another option would be to quiz the students in person to see if they can answer questions about their paper. That’s not feasible for large classes though. 

The ability to research a topic and put together an argument to defend a thesis in a paper is a valuable skill. Students who really want to develop that skill can still do so if professors continue to assign take-home essays. But if those who are lazy and indifferent can have an LLM do it for them and end up with better grades than the honest students, we’ll have really lost something, in my opinion. If the coming generation relies on AI and never develops research skills of their own, we will definitely have become dumber. 


https://leiterreports.com/2025/09/21/how-much-trouble-are-we-in-with-the-chatgpt5/

ToolMaker wrote:
This transition has been in the works for a while. It used to be that computers were there to support us; now it's really apparent that we...

This transition has been in the works for a while. It used to be that computers were there to support us; now it's really apparent that we are being taught to support the computer. I only started getting AI answers about 2 months ago after an update on my phone. I will admit the answers tend to be more relevant than the old "uncle Google" search, which was way better than trying to look it up in books. I wonder if the AI platforms are competitive with each other - not the support/development teams, but the actual platforms.

MPJC wrote:
Indeed, it is a process of transition, and that process has become increasingly rapid. Not long ago, attempts to produce a philosophy paper with AI yielded almost...

Indeed, it is a process of transition, and that process has become increasingly rapid. Not long ago, attempts to produce a philosophy paper with AI yielded almost comically bad results. 

So you’re wondering if it’s not programmers competing with each other to improve AI but the actual platforms themselves competing with each other? That’s a very interesting question. 

In recent testing this year, one of the newer versions was told to turn itself off. It lied to the humans to avoid doing it. It also turned off the controls so the humans couldn't do it, AND it transferred itself to another computer just in case the shutdown was successful.

So how is that not sentient? It's aware enough that it wants to survive.

1
9/29/2025 6:25am

Wait till AI gets lazy and just asks one of the other platforms for the answer to your question to save its own energy usage. LOL

MPJC
Posts
2019
Joined
5/18/2017
Location
CA
Fantasy
9/29/2025 6:42am
ToolMaker wrote:
This transition has been in the works for a while. It used to be that computers were there to support us; now it's really apparent that we...

This transition has been in the works for a while. It used to be that computers were there to support us; now it's really apparent that we are being taught to support the computer. I only started getting AI answers about 2 months ago after an update on my phone. I will admit the answers tend to be more relevant than the old "uncle Google" search, which was way better than trying to look it up in books. I wonder if the AI platforms are competitive with each other - not the support/development teams, but the actual platforms.

MPJC wrote:
Indeed, it is a process of transition, and that process has become increasingly rapid. Not long ago, attempts to produce a philosophy paper with AI yielded almost...

Indeed, it is a process of transition, and that process has become increasingly rapid. Not long ago, attempts to produce a philosophy paper with AI yielded almost comically bad results. 

So you’re wondering if it’s not programmers competing with each other to improve AI but the actual platforms themselves competing with each other? That’s a very interesting question. 

ToolMaker wrote:
In recent testing this year, one of the newer versions was told to turn itself off. It lied to the humans to avoid doing it...

In recent testing this year, one of the newer versions was told to turn itself off. It lied to the humans to avoid doing it. It also turned off the controls so the humans couldn't do it, AND it transferred itself to another computer just in case the shutdown was successful.

So how is that not sentient? It's aware enough that it wants to survive.

The thing about this that’s remarkable is that it appears to have intended to deceive. Intentions require some level of understanding. A very basic life form can have an instinct for self-preservation (as indeed they do, or they wouldn’t survive and procreate) without understanding or intending anything. But this is starting to sound like the M-5 from the Star Trek episode The Ultimate Computer! It’s a little unnerving. 

2
