
Thoughts on AI?

34 posts, 915 views
6 months ago
Out of curiosity, what are the thoughts of everybody on the current AI revolution? Is it good? Is it bad? And what should one do to survive in the changing world?
Totally not stressed about the future >w<
+3 / -0
6 months ago
Use it for the betterment of Zero-K kind. One small step for coders, one big leap for Zero-K. Otherwise no opinions.
+2 / -0
6 months ago
Basically I want Z-K IRL, so AI is the right step.

If we go too far it becomes Total Annihilation though. I for one welcome our CORE masters and wish for the destruction of the ARM rebels if my toaster asks.
+2 / -0
6 months ago
Curious about answers
+2 / -0
6 months ago
It is silently taking over the world ;)
+1 / -0
6 months ago
AI is only as smart as its dumbest operator.
+0 / -0
6 months ago
Current AI has issues explaining how to replace the batteries in a remote without patronizing and wasting 6 paragraphs on the ethics of buying disposable batteries. If they put that into a robot, it'll be the end of the world for whoever is wasting money funding that.
+1 / -0


6 months ago
Any tool that can help us achieve better & faster results will always be good, all of our evolution has been about making processes faster and more efficient, and AI is no different.

That being said, it is still far from being a consistent tool, let alone a good one in many scenarios. It is the new calculator, and as such you'll have to learn how to use it (if you can), but it can't in any capacity help you if you don't already kind of know what you're looking for. And it will be like this for many years, I believe, because it's hard to make an intelligence "transformative" when we can't even understand our own.

Though I must say, can't wait for NPCs with AI-generated voice lines lol
+1 / -0
LLMs have ruined drivel for me

Previously it was like "does this person even have any point with their bloviating logorrhea"

Now it's "did they just prompt chatgpt to write this?"
+2 / -0

6 months ago
The "I" in "AI" does not deserve its name. Current "AI" is just an ultrafast mashup of a giant mountain of data, trying to convince you it is human-like. Nothing new comes out of it; it just baffles tiny and slow human brains with something they might not compile themselves within their given limited time.

+5 / -0
It is very hard to judge. While LLMs and generative computer vision models are crazy useful tools, those people calling for halting their development, etc., are needlessly paranoid. Those models have no agency of their own; the models themselves are harmless (however, spreading fake news and such already is much easier with them, and that is a big threat, but it won't be slowed by halting their development). All of it is just a bunch of statistics applied to a huge pile of data.

Now, their strength depends on how many tasks can be "solved" with pure statistics; I bet a lot of people thought this approach couldn't get us that far in language generation, but it did. ChatGPT and others certainly have some interesting emergent capabilities - will it be enough to correct scientific articles in, let's say, 2-5 years, or will we need something stronger than transformers? If it understands how code works and can write unseen code to deal with new tasks, why couldn't it grasp some physics concepts?
Time will tell. I personally think something a bit stronger will be needed (pure text and model capacity can only get us so far) but we might be on the right track.

What I found interesting was reinforcement learning (AlphaGo destroying that Go master, AlphaStar beating the strongest StarCraft players, ...). But after having a (very short, potentially wrong) look at it, it seemed to me that it is just the old agent/graph-search paradigm enhanced by some neural networks. It feels different from how humans think; we will imho need a different technique to create a model that can be as general as humans are and that can get good at games/driving/ThisTypeOfTask after only a few trials. Still, it found its use in protein structure prediction; perhaps there are other niches where it can be as effective as well.
+1 / -0

6 months ago
So long as there's no conquest nor war, I imagine AI to be an extension of ourselves.
We all came from a single point.
Now this point creates a new child.
+0 / -1
6 months ago
I am overall a bit skeptical, as I have kind of seen an association between the level of enthusiasm about AI/LLMs and the level of knowledge in various fields (a lot of the very, very enthusiastic people were the ones who knew fewer things). Then after X months the same people were blaming "model changes" for "bad performance" (which, my guess is, was just them realizing it is not as great).

That being said, there is true potential in LLMs, but for now we don't have a clear grasp on how hard it is to use that potential. I am using LLMs for some projects and I am baffled both by how well they work sometimes and how badly they work in a couple of cases. Reproducibility and stability seem to be complex issues at this point.

One aspect mentioned rarely is that some of these models require huge computing resources. I ran a couple of models on my desktop and the quality was much lower than what you would get via an API in the cloud. There is still a lot of work until someone makes some "device" that works with efficiency comparable to a human brain (ok, there are various useless things attached to the brain, but overall it does not consume much energy).
+1 / -0

6 months ago
Note on the Go AI: it has been beaten with apparent ease by exploiting an adversarial attack. https://arstechnica.com/information-technology/2023/02/man-beats-machine-at-go-in-human-victory-over-ai/
This is also how players eventually beat the Dota OpenAI bots, both the 1v1 mid bot (susceptible to creep cutting) and the OpenAI Five team bot.

If humans build a ship, we say the ship is safe when we inspect the hull and verify there are no holes. If there is a hole, we have fixes of different quality for it, and good technique will make the ship seaworthy again.

An AI model is like a ship, but instead of 3-dimensional it's N-dimensional, where N might be like 1402. Learning models take a configuration of this N-dimensional space and simulate whether the ship is seaworthy by passing it through known seaworthy tasks. They iterate on the N-dimensional configuration until human observers say it's likely enough to be a good boat in the N-dimensional sea. However, there's little guarantee that the ship is actually watertight. Maybe the AI just made a long hallway and added a bunch of water pumps to keep the water generally out. So we put the N-dimensional ship out to sea, discover a leak (see: jailbreaking ChatGPT with DAN) and tell the learning model to patch the ship to fix the leak. However, the N-dimensional ship probably has more leaks.
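The "iterate on the configuration until it passes the known checks" loop can be sketched as a toy random search (purely illustrative; the tasks, the dot-product "floats" check, and the search strategy are all made up for the example - a real model has billions of dimensions and gradient-based training):

```python
import random

random.seed(0)

N = 4  # dimensions of the "hull" configuration

# The only "seaworthiness" checks we have: input -> expected outcome.
train_tasks = [((1, 1, 0, 0), True), ((0, 0, 1, 1), False)]

def seaworthy(weights, tasks):
    # Crude check: a positive dot product counts as "floats".
    return all((sum(w * x for w, x in zip(weights, xs)) > 0) == label
               for xs, label in tasks)

# Iterate on the N-dimensional configuration until it passes every
# known check -- at no point do we inspect the hull itself.
weights = [0.0] * N
while not seaworthy(weights, train_tasks):
    weights = [random.uniform(-1, 1) for _ in range(N)]

# The configuration now passes all known tasks, but nothing guarantees
# sensible behaviour on an input the search never saw -- that is the
# hallway full of water pumps.
```

The point of the sketch: the acceptance criterion only ever touches the known tasks, so every input outside them is a potential leak.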
+4 / -0
6 months ago
As long as AI is developed by human programming, AI will be as flawed as us. So I think it's best we start AI developing AI developing AI developing AI developing AI for many generations and have it design its own optimal processing units. That way there will be a moment AI will truly surpass us at every aspect, and it might even be worthy of ruling us so we can finally have peace in the world. God bless AI!!
+1 / -1
6 months ago
USrankAdminSteel_Blue: your ship example reminded me of something. I do not have much experience with ships, but I imagined the requirements are pretty clear (i.e. put it on water, don't go to the bottom, be able to steer). At some point I stumbled upon https://en.wikipedia.org/wiki/Rogue_wave which is overall an interesting read, but I will quote two things:

In 1826, French scientist and naval officer Captain Jules Dumont d'Urville reported waves as high as 33 m (108 ft) in the Indian Ocean with three colleagues as witnesses, yet he was publicly ridiculed by fellow scientist François Arago. In that era, the thought was widely held that no wave could exceed 9 m (30 ft)

By 2007, it was further proven via satellite radar studies that waves with crest-to-trough heights of 20 to 30 m (66 to 98 ft) occur far more frequently than previously thought.[36] Rogue waves are now known to occur in all of the world's oceans many times each day.

So the first time people understood that ships must withstand waves that tall was sometime after 2000. In my opinion that shows that while we had "some process" for determining ship safety, the process was inadequate considering the real world.

The amazing part about AI is that even with as many holes as it has (and I agree, it has many), it allows (some) people to do more than before, and this probably is a significant jump in efficiency on a similar level as other things like personal computers, the web, and mobile phones. The AI might not benefit the experts (they are already quite good), but if it helps "the average person" a bit, that's still a lot.
+2 / -0

6 months ago
I found ChatGPT quite underwhelming as a writing tool. Sure, you can use it to produce generic texts like applications; maybe you can use it to explain tables in scientific literature, etc. But working with it has a structural problem.
Task: recreate the post of USrankAdminSteel_Blue above so that it says exactly the same thing, but without copying the original post in and telling it to reformulate it. First, even if you did use the reformulation method, you would very likely get some gibberish that is maybe correct, but not exactly what USrankAdminSteel_Blue said. And even if it were really good, then YOU HAVE ALREADY DONE the work.
If you start from zero, it takes 10 times longer to explain to the bot what you want to express than just writing it yourself.
As long as you can't plug a cable into your brain, have the bot read your thoughts and write them down - hope you're organized in your head tho - communicating with it is absolutely inefficient.

Someone else came to the same conclusion as me:
+1 / -0
6 months ago
Probably you are an expert. I have seen people using ChatGPT to ask a question because they had no clue what keywords to put into Google to get what they were looking for (and it was not rocket science :-p). And their reaction was "wow, I feel like Superman now that I can do this" (and I was thinking "wow, you were not able to do that?!")
+2 / -0
I'm curious about whether anyone poasting in this thread has actually tried GPT-4

quote:
I have seen people using ChatGPT to ask a question because they had no clue what keywords to put in google to get what they were looking for

To be fair, a large piece of the problem here is that all the search engines have become crap through concerted efforts of SEO companies and also of search engine companies
+2 / -0
As I said earlier, I am using LLMs, more specifically ChatGPT 3.5 and 4, daily, but to develop a product, not to ask questions for myself.

For programming questions I find Google more or less the same. For providing code templates it is fine, but then I need to adjust them a lot to what I actually need anyhow, and for most stuff I know it well enough that it is less effort to just write it myself than to explain in detail what it should do (same effort).

One thing I really notice: in quite some cases it is not stable. Ask a question 100 times: 80% is good, 20% very wrong. Make a slight change (one that should not make a difference) and you can get to 100% wrong (or to 100% good). This is a nightmare to explain to commercial people ("But yesterday it worked! What did you change?! Make it like yesterday!").

Edit: not to mention GPT-4 is 10x more expensive than GPT-3.5, and while it is definitely better, I am not sure it is even 5x better...
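One common workaround for that 80%/20% instability is to sample the same question several times and take a majority vote (self-consistency). A minimal sketch - `ask_model` is a hypothetical stand-in that fails 20% of the time, not a real API call:

```python
import random
from collections import Counter

def ask_model(prompt, rng):
    # Hypothetical stand-in for an LLM call: returns a wrong
    # answer 20% of the time, mirroring the 80/20 split above.
    return "good answer" if rng.random() < 0.8 else "wrong answer"

def majority_vote(prompt, n=15, seed=0):
    # Sample the model n times and keep the most common answer.
    rng = random.Random(seed)
    votes = Counter(ask_model(prompt, rng) for _ in range(n))
    return votes.most_common(1)[0][0]
```

With 15 samples, the wrong answer has to win a majority of independent 20%-probability draws, so the aggregate failure rate drops well below the single-shot 20% - at 15x the cost per question, which is exactly the pricing trade-off mentioned above.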
+2 / -0