The flaws were easily spotted by the knowledgeable readers of this forum. If you didn’t already have some knowledge, then the mistakes wouldn’t be apparent at all. It takes knowledge to evaluate information, but if you’re looking to that information for knowledge, then how can you evaluate it? Therein lies the conundrum.
There is a real possibility that AI will someday soon pass the Turing test (let me know if one already has). If you're not familiar with the test, it's when a computer can fool its questioner into believing it is human. At some similar point in time, AI may claim it is self-aware (one has already done so). The question is this: Is the computer self-aware the way we are? And if so, how could we really know this to be true? What if AI begins demanding human rights?
A surveillance state like never seen before.
https://www.forbes.com/sites/thomasbrewster/2025/09/03/ai-startup-flock…
Sounds horrible
How can we know for sure that you are self aware? How can we even be sure you are not AI?
You could turn his A/C off for a while. If he quits responding, he's probably AI.
The only reason AI is coming to fruition is because large corps see profit in it. Eliminate middle management jobs, blue collar jobs, etc. The truth is, we either get busy utilizing it to our own personal advantage (which generally is successful in helping others) or we fall behind the times. I am not a fan of AI, and yet I utilize it with my own perspective and take it with a pinch of salt. I have developed quite a few badass things with it that I am about to place on the market as well.
In the end, all of it still won't matter. The focus was, and always will be, on yourself and how to improve mentally and physically, each and every day. The wins start stacking up.
John Searle argued quite persuasively that it's possible to mimic human interaction so perfectly that the person you're interacting with could be completely convinced that they are having a conversation with someone who understands them (thus passing the Turing test), while completely lacking understanding or any kind of cognition. As Searle puts it:
"Because the formal symbol manipulations by themselves don’t have any intentionality; they are quite meaningless; they aren’t even symbol manipulations, since the symbols don’t symbolize anything. In the linguistic jargon, they have only a syntax but no semantics. Such intentionality as computers appear to have is solely in the minds of those who program them and those who use them, those who send in the input and those who interpret the output."
https://rintintin.colorado.edu/~vancecd/phil201/Searle.pdf
So it's quite conceivable to simulate intelligence where there is none. This doesn't mean that it is impossible in principle for there to be a sentient machine. But there is no reason to think that the kinds of computers we have currently are doing anything more than executing a program without understanding anything - albeit more quickly, and in a much more sophisticated manner than when Searle wrote this paper. But nothing, in principle, has changed. Computing power and efficiency have increased exponentially but computers are still the same basic kind of machine.
What would have to happen for me to think a machine claiming sentience would actually be sentient? That's a difficult question. We don't even understand how the goings on in our own brains cause the conscious states that we experience. We're aware of many correlations, but that's not the same as understanding why there should be anything that it is like for us to be us at all. So can there be something that it is like for a computer to be a computer?
Here's another way to approach it - work backwards, so to speak. Suppose you had a brain malfunction, and we had the technology to replace a small part of your brain with silicon chips that perform the function currently performed by synapses and neurons. The operation is a success, and as far as anyone can tell, you're exactly as before. But your brain disease keeps worsening, so bit by bit, organic material in your brain is replaced by silicon chips and connections. Each time the surgery is completed, you seem unchanged. Suppose this continues until all the organic matter is completely replaced. Is there a point at which we should treat you as non-sentient if you seem indistinguishable from the person you were before any of the procedures? You're claiming sentience. Should we believe you? I think the ethically proper thing to do is err on the side of believing you and treating you as sentient. So at some point, arguments about how computers can't be sentient notwithstanding, if something claims sentience and asks for rights, it may be morally proper to grant them.
The Star Trek TNG episode where Data is on trial because he objects to being disassembled and examined deals with Falcon’s question - and it’s a tremendously interesting question - very well (one of the very best episodes in the franchise in my opinion). It’s called Measure of a Man. Worth the watch if you’re into that sort of thing.
Think I'll pass on this, while I can...
I second this - great episode.
That guy's ex-Israeli military....
I bet all those terrorists in Lebanon who had their phones blow up endorse this. 😲
Kind of like China’s social credit system on steroids. Some people will probably welcome it. Those people are stupid.
The AI right now is friggin incredible. I absolutely love it and I only started using it in the last few weeks. I can't imagine what we're in for in the near future. It will be amazing.
What are you using AI for?
I guess I started using AI when I bought my Tesla with FSD back in November. It's driven us 30,000 miles all over North America in less than 10 months. Traveling is SOOOO much better. Then we got an over the air update that included Grok and I talk to Ara (Grok) while driving and she tells me all the interesting things about the places we go along with any other thing I can think to ask her. I also ask her things like what restaurants are around the next charger we are stopping at and how many footsteps it is from the charger along with how many stars each one is and read out some of the reviews. It's like talking to the smartest person on Earth. Today I had Grok write me a python program to convert some PDF invoices from all of our charge stops into a CSV that I could import into LibreOffice Calc.
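The PDF-to-CSV conversion mentioned above is a task an LLM handles well. A minimal sketch of what such a script might look like, assuming a hypothetical invoice layout (the `Date:`/`Location:`/`Energy:`/`Total:` labels and the `pypdf` library are my assumptions, not the actual program Grok wrote):

```python
import csv
import re
from pathlib import Path

# Column names for the output CSV (hypothetical).
FIELDS = ("date", "location", "energy_kwh", "total_usd")

def parse_invoice_text(text: str) -> dict:
    """Pull the fields we care about out of one invoice's raw text.

    The regex patterns below assume a made-up invoice layout; adjust
    them to match whatever your charging invoices actually contain.
    """
    patterns = {
        "date": r"Date:\s*(\d{4}-\d{2}-\d{2})",
        "location": r"Location:\s*(.+)",
        "energy_kwh": r"Energy:\s*([\d.]+)\s*kWh",
        "total_usd": r"Total:\s*\$([\d.]+)",
    }
    row = {}
    for field, pat in patterns.items():
        m = re.search(pat, text)
        row[field] = m.group(1).strip() if m else ""
    return row

def invoices_to_csv(pdf_dir: str, out_csv: str) -> None:
    """Extract text from each PDF and write one CSV row per invoice."""
    from pypdf import PdfReader  # third-party: pip install pypdf
    with open(out_csv, "w", newline="") as fh:
        writer = csv.DictWriter(fh, fieldnames=FIELDS)
        writer.writeheader()
        for pdf in sorted(Path(pdf_dir).glob("*.pdf")):
            text = "\n".join(
                page.extract_text() or "" for page in PdfReader(pdf).pages
            )
            writer.writerow(parse_invoice_text(text))
```

The resulting CSV opens directly in LibreOffice Calc. Text extraction quality varies a lot between PDF generators, so the parsing step usually needs a round of tweaking against real invoices.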
Hey Void Main, out of curiosity, and since searches are all over the place, what would you say the average "fill up" on the road for the Tesla is?
Oh, and time to top up.
Thanks, SMF
So cool. You are basically living my retirement dream.
Pretty funny use of AI:
https://youtu.be/EfxUI_p6I6Y?si=EwW5YsqxxEQTY7bO
Quote: "100% making people stupider."
Looks like it already started.
This is really interesting. This is a very passable philosophy paper written by an LLM. It's probably to the point where if I was teaching, I'd assign in-class essays, written with pen and paper - give the topics in advance so they can research and formulate what they will write in class. Even with that, a lot of research will probably be via AI rather than reading primary sources. Not much we can do about that.
Another option would be to quiz the students in person to see if they can answer questions about their paper. That’s not feasible for large classes though.
The ability to research a topic and put together an argument to defend a thesis in a paper is a valuable skill. Students who really want to develop that skill can still do so if professors continue to assign take home essays. But if those who are lazy and indifferent can have a LLM do it for them and end up with better grades than the honest students, we’ll have really lost something, in my opinion. If the coming generation relies on AI and never develops research skills of their own, we will definitely have become dumber.
https://leiterreports.com/2025/09/21/how-much-trouble-are-we-in-with-the-chatgpt5/
Don't they already give AI certain "rights"? They'll turn off Siri if you yell at and abuse her with vile comments.
This transition has been in the works for a while. It used to be that computers were to support us, now it's really apparent that we are being taught to support the computer. I only started getting AI answers about 2 months ago after an update on my phone. I will admit the answers tend to be more relevant than the old "uncle Google search" which was way better than trying to look it up in books. I wonder if the AI platforms are competitive with each other, not the support/development team, but the actual platform.
Indeed, it is a process of transition, and that process has become increasingly rapid. Not long ago, attempts to produce a philosophy paper with AI yielded almost comically bad results.
So you’re wondering if it’s not programmers competing with each other to improve AI but the actual platforms themselves competing with each other? That’s a very interesting question.
Waste of electricity and many other resources. Totally unnecessary. Will do far more harm to society than good. If the government is loving it, then that's your first indicator that it's bad.
But nobody is claiming that Siri is sentient, right? I wonder what the motive is for turning Siri off in these circumstances? Perhaps it’s a matter of thinking that if you abuse Siri your attitude towards Siri might carry over to people so they want to nip that in the bud. I’m just spitballing but either they think Siri can be offended or there’s some other reason for shutting it down. I find myself resisting the urge to say “she/her” instead of “it”.
In recent testing this year, one of the newer versions was told to turn itself off. It lied to the humans to avoid doing it. It also turned off the controls so the humans couldn't do it, AND it transferred itself to another computer just in case the shutdown was successful.
So how is that not sentient? It's aware enough that it wants to survive.
Wait till AI gets lazy and just asks one of the other platforms for the answer to your question to save its own energy usage. LOL
The thing about this that's remarkable is that it appears to have intended to deceive. Intentions require some level of understanding. A very basic life form can have an instinct for self-preservation (as well they do, or they wouldn't survive and procreate) without understanding or intending anything. But this is starting to sound like the M-5 from the Star Trek episode The Ultimate Computer! It's a little unnerving.