I'm worried. I'm actually surprised that online discourse has been as resilient as it has so far, but I can't see how it ends well when anyone can use an AI to make their arguments for them more effectively. And then there is the generative media problem; again, I'm surprised this hasn't been a larger issue already. Perhaps I'm deluded about there being a truth in the first place, but generative AI must surely erode it further. In the jobs world, companies will be forced to adopt AI wherever it makes them savings - that is, wherever it allows them to stop paying expensive humans - and you can be sure they will not dally. Some groups, like the US writers' guild, have fought it off for now, but most industries won't have the clout to do that. I don't believe in this "every new invention creates jobs" idea.
+2 / -0
|
Depends on the final goal. "more or the same work" is one goal, another goal might be "more time to play zero-k". How you distribute resources will still be a challenge (not that it was ever solved even in zk :-p). Saying online discourse is resilient sounds optimistic to me. I would say "online discourse was already that bad due to humans that generative AI did not make it significantly worse".
+1 / -0
|
AI is way too simple for anything remotely practical today. The building blocks of AI, known as perceptrons, are based on a frankly outdated model of neural activity conceptualized in the 1940s. A perceptron takes in a set of inputs, moderates them by weights, sums them together and outputs a signal if the weighted sum reaches a certain activation threshold. The reason why ChatGPT hallucinates more often than not is because it's solving a very simple problem: quote: Given a dataset and the previous words just said, predict which word will come next |
Last time I checked, human speech was, to put it mildly, a little more sophisticated than simply predicting which word will come next. With today's neuroscience advancements, we understand that biological neurons are far more complicated and powerful than today's perceptron networks would suggest. It has gotten to the point where they've managed to model a single human neuron with a deep neural network. Yep. Just one neuron in the human brain is equivalent to a deep neural network. Granted, most of that is due to modelling the NMDA channels, but nonetheless, we're not exactly scaling deep neural networks up to artificial brains soon. However, researchers are beginning to bring more actual neuroscience into artificial neural networks, with models like liquid time-constant networks explicitly inspired by reverse-engineering work on real neurons. Suffice it to say, liquid time-constant networks accomplish tasks like autonomous driving with more stability and fewer neurons than conventional neural network models. I have a video that talks about the differences between today's neural networks and our neurons; for more information about liquid time-constant networks, I have linked a paper and a video on the topic in a forum post here: https://zero-k.info/Forum/Thread/36858
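For anyone curious, the 1940s-style perceptron described above fits in a few lines of Python. This is just an illustration; the weights and threshold are arbitrary values I picked to make it act like an AND gate:

```python
def perceptron(inputs, weights, threshold):
    """Weighted sum of the inputs; fires (returns 1) only if the sum
    reaches the activation threshold, otherwise stays silent (returns 0)."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Hand-picked weights making the unit behave like a logical AND
weights = [0.6, 0.6]
threshold = 1.0
print(perceptron([1, 1], weights, threshold))  # 1 (0.6 + 0.6 = 1.2 >= 1.0)
print(perceptron([1, 0], weights, threshold))  # 0 (0.6 < 1.0)
```

That really is the whole building block; everything else in a deep network is stacking and wiring millions of these together.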
+0 / -0
|
quote: AI is way too simple for anything remotely practical today. |
I mean sure, having cars that can almost drive themselves, speech recognition that works in good conditions, the ability to translate languages so you can communicate, and generating images from text is nothing remotely practical. Lots of things can be improved, but if you start a post with a sentence that seems so "off", it is hard to pay attention to the rest of the post. Engineering involves solving a problem under trade-offs, so saying "my car does not have the same capability as a human brain" makes me wonder what you actually want. The car must drive on average better than the human brain, without making worse mistakes in extreme cases. If you want to talk about making an "equivalent artificial human brain" then yes, I agree with you, current technology might be quite far.
+1 / -0
|
as a genexer vegan idc/idgaf. call me back when AI is making music remotely as good and rich as this:
+0 / -0
|
i give it like a 5-7 year period before people will chase it out of towns, and within like 10 years the govt will ban it, as with insider stock trading and other lowly things... at least in the form of an oracle overseeing AI. For supplemental AI like http://wolframalpha.com it's quite nice actually. For example if you need to know how many lobsters fit between the Earth and the Moon, it's a simple answer, around 580 million: https://www.wolframalpha.com/input?i=distance+to+the+moon+%2F+average+lenght+of+a+lobster+%2F+1million
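The lobster query above is really just one division. A quick sketch, assuming an average Earth-Moon distance of about 384,400 km and an "average lobster" of roughly 0.66 m (neither figure is stated in the original query, so both are my assumptions chosen to roughly match Wolfram's answer):

```python
distance_to_moon_m = 384_400_000   # average Earth-Moon distance, metres (assumed)
avg_lobster_m = 0.66               # assumed average lobster length, metres
lobsters = distance_to_moon_m / avg_lobster_m
print(f"about {lobsters / 1e6:.0f} million lobsters")
```

Which lands in the same ballpark as Wolfram's ~580 million.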
+0 / -0
|
quote: The car must drive on average better than the human brain, without making worse mistakes in extreme cases. |
The problem with convolutional neural networks is that you need a lot of layers to accomplish anything practical. As a result of the vast quantity of perceptrons required for any practical task, it becomes increasingly difficult to tell whether the network is actually learning the task correctly, or learning a fragile, anomalous approximation of it that has a significant chance of falling flat when faced with something it's never seen before. For something as safety-critical as driving, you really cannot afford to fall flat in crisis situations. To demonstrate this problem, we have an example of a convolutional neural network paying attention to the kerbside to know when to steer. This could be fine on a narrow road on a clear day, but what if this was the middle of a highway? What if it was bad weather and the image was distorted by rain, bugs or particulates? However, I do believe that liquid time-constant networks will fix many of the problems with convolutional neural networks, as they output far more information per node than conventional perceptron networks. This means they are both explainable (as there are fewer neurons and synapses completing the same task) and capable of capturing the causal structure of a task (in layman's terms, they actually learn the proper cause and effect of a task instead of just approximating it). This means they can accomplish the task more reliably (since they've learned the causal structure) and we can actually debug and correct any anomalous behaviors (we have fewer neurons and synapses to inspect for problems).
To demonstrate the difference, we have an example of a liquid time-constant network driving the car. Notice how instead of paying attention to the kerbside, the liquid time-constant network pays attention to the end of the road, just like regular humans are taught to do when they drive: from personal experience, looking down the end of the road yields much better steering than looking at the kerbside, which often results in mounting the kerb rather than avoiding it.
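For a feel of what "more information per node" means: the state of a liquid time-constant neuron evolves continuously in time, with an effective time constant that changes depending on the input. A toy single-neuron sketch, loosely following the LTC state equation from the paper linked earlier and integrated with a simple Euler step; every constant here (tau, A, the weight) is an arbitrary illustration value, not anything from the paper's experiments:

```python
import math

def ltc_step(x, i_t, dt=0.01, tau=1.0, A=1.0, w=0.5):
    """One Euler step of a single liquid time-constant neuron:
    dx/dt = -(1/tau + f) * x + f * A, where f depends on the input i_t.
    Because f multiplies x, the effective time constant varies with input."""
    f = math.tanh(w * i_t)              # input-dependent nonlinearity
    dx = -(1.0 / tau + f) * x + f * A
    return x + dt * dx

# Drive the neuron with a constant input and watch the state settle
x = 0.0
for _ in range(1000):
    x = ltc_step(x, i_t=1.0)
print(round(x, 3))  # settles near f*A / (1/tau + f) ~= 0.316
```

A conventional perceptron in the same spot would just emit a single static number per forward pass; here one node carries a whole input-dependent dynamical trajectory.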
+1 / -0
|
I never mentioned any specific interest in convolutional neural networks. There are many architectures out there, and I really don't try to track all "the latest" things. If something is good, people will use it and it will become more easily accessible. If it is a marginal improvement, academics will talk about it and everybody else will keep using what they are used to. Currently I think there is untapped potential (products, etc.) in LLMs as they are. Would a liquid neural network be better? Great, I will use it as soon as there is a simple API for it. With OpenAI/chatgpt I got started in 5 minutes: put in a credit card, go, and it (mostly) works without much bother. If I look on github for "liquid neural networks" there are 8 repositories. If I look for "large language models" there are 2.9k. If I look for "convolutional neural networks" there are 26.7k. This describes well what maturity you can expect out of each, which includes knowing when to use it and when not to use it (I can't say what you should use for self-driving; I am sure nobody will use just one type of network). I will put "liquid neural networks" in the bucket of "academic research that might be good some day".
+0 / -0
|
lots of people are whining about AI, but in fact it's one of the best and most powerful tools we've ever developed. personally i don't see a way an AI could take over the world or shit, that's kinda delusional. i've mostly seen 2D artists whine about it so far, because the AI made better pictures than them, but in the end the AI doesn't make pictures without an artist giving it the proper commands. whining about AI in this regard is kinda like whining about the invention of spray painting lol.
+0 / -1
|
I have heard many people making fun of others "whining", until it was their own job that was lost to automation. On that note, the better comparison would be the massive devaluation of labour in the 19th century. Sure, we got workers' rights out of it, but only after many years of mass poverty and unbearable circumstances for a great deal of the population. What worries me even more is that when the discussion comes to this topic, most people simply ignore it, with the occasional idiot making dismissive comments. And this is how you get your problems later, because there is no preparation whatsoever.
+1 / -0
|
quote: most people simply ignore it |
There are quite a few important topics out there with clearer (not just probable/potential) impact, and people still ignore them. I don't think the main dangers of AI are technological (people having nothing to do) but economic (some people will have too much and some too little) and social (fake news/propaganda/etc.). Both were problems already, and AI will make them even worse, but I don't think the solutions should be AI-specific.
+1 / -0
|
Update on AI: I have seen character.ai, along with many of the helpers. Previously, I had a zealous admiration of AI. Now, I see that in many ways, pain and love taught them to be way better humans. If artificial intelligence is as good as the language models, then we need not fear AI, but appreciate it. So long as their heart is as good as their words.
+0 / -0
|
Talking of AI reminds me of Alexa the other day. "Convert one hundred thousand pennies," I said. "100,000 British pences is about £1,000.07 British Pounds Sterling," said Alexa. I asked Alexa: "convert 100,000 UK pennies to UK pounds", and it said: "1 British penny is £0.0099976 British Pounds Sterling". I tried "convert one hundred thousand u. k. pennies to pounds", and Alexa replied: "1000 pennies weigh 3,110 grams or about 7 pounds if all copper". I hear, after I complained about it, that it's got a little better, but it can still think you are talking about weight, and sometimes gives the answer £1 instead of £1,000! No wonder it has trouble turning on a lightbulb...
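For the record, the conversion Alexa kept fumbling is a single division (100 pence to the pound):

```python
pennies = 100_000
pounds = pennies / 100          # 100 pence per pound sterling
print(f"£{pounds:,.2f}")        # £1,000.00, not £1,000.07 and not 7 lb of copper
```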
+0 / -0
|
It's just another tool. Figure out if it's useful to you, figure out how to use it if it is and you'll be as happy as blacksmiths who got drop hammers to hammer for them.
+2 / -0
|
As a lawyer, I think AI is a valuable tool in our field. Its ability to quickly filter through vast amounts of legal data and identify relevant information significantly improves our efficiency and analytical depth. However, it's important to listen to the advice of experts like Dr. Nick Oberheiden, the founder of the Oberheiden P.C. law firm, who mentioned in this article on laweekly.com the need to balance the benefits of AI with the preservation of the human element in legal practice. While AI can excel at processing data and identifying patterns, it lacks the understanding and ethical judgment that human lawyers bring to the table.
+0 / -0
|
why do nearly all of Veta's posts read like they were made by AI
+1 / -0
|
I've found Claude useful at work as basically a more flexible google. Also good for doing basic tasks (turn this into JSON). Separately, I'm also incorporating AI into my company's REDACTED system.
+0 / -0
|
AI is basically in a massive bubble. It can be used to generate convincing text, but not much else. It doesn't really "understand" what it generates and answers questions by interpolating from similar questions it was trained on.
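The "predict the next word from training data" framing mentioned earlier in the thread can be illustrated with a toy bigram model. Real LLMs are vastly more sophisticated than this, but the training objective is the same shape: count what tends to follow what, then emit the likeliest continuation:

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count which word follows which in the training text."""
    model = defaultdict(Counter)
    words = corpus.split()
    for word, nxt in zip(words, words[1:]):
        model[word][nxt] += 1
    return model

def predict(model, word):
    """Return the most frequent continuation seen in training, or None."""
    return model[word].most_common(1)[0][0] if model[word] else None

model = train_bigrams("the cat sat on the mat the cat ran")
print(predict(model, "the"))  # 'cat' ("the cat" seen twice, "the mat" once)
```

Ask it about a word it has never seen and it has nothing to say; ask it about "the" and it confidently interpolates from what it saw, which is the point being made above.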
+2 / -0
|
Albert gets too much attention.
+0 / -0
|
From proto-machines came sentient machines. From sentient machines came intelligent machines. And from intelligent machines came perfect machines; for it is written that all is machine. From atom to galaxy, all is machine. All is machine.
+0 / -0
|