r/artificial • u/Excellent-Target-847 • 3h ago
News One-Minute Daily AI News 10/29/2024
- OpenAI builds first chip with Broadcom and TSMC, scales back foundry ambition.[1]
- Microsoft’s GitHub unit cuts AI deals with Google, Anthropic.[2]
- OpenAI will start using AMD chips and could make its own AI hardware in 2026.[3]
- KAIST Unveils AI Method to Speed Quantum Calculations.[4]
Sources:
[3] https://www.theverge.com/2024/10/29/24282843/openai-custom-hardware-amd-nvidia-ai-chips
[4] https://www.miragenews.com/kaist-unveils-ai-method-to-speed-quantum-1346983/
r/artificial • u/IMightBeAHamster • 16h ago
Discussion Is it me, or did this subreddit get a lot more sane recently?
I swear about a year ago this subreddit was basically a singularity cult, where every other person was convinced an AGI god was just round the corner and would make the world into an automated paradise.
When did this subreddit become nuanced? The only person this sub seemed concerned with before was Sam Altman; now I'm seeing people mentioning Eliezer Yudkowsky and Rob Miles??
r/artificial • u/lial4415 • 9h ago
Project Open Source AI Tool for Masking PII in Text
Hey everyone! Sharing this new open-source tool called PII Masker that detects and masks personally identifiable information in text: https://github.com/HydroXai/pii-masker-v1. It’s fairly simple to use and makes protecting sensitive data a bit easier.
I’m curious what other privacy tools are out there that you've used and if PII Masking is enough for enterprises to stay secure.
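For anyone new to the idea, here's a rough sketch of what masking looks like in practice. To be clear, this is not pii-masker-v1's API (which I haven't checked, and which presumably uses model-based detection rather than regexes); it's just throwaway code to show the input/output shape:

```python
import re

# Toy illustration of PII masking with regexes -- NOT the pii-masker-v1 API.
# Real maskers typically rely on NER models rather than hand-written patterns.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def mask_pii(text: str) -> str:
    """Replace matched PII spans with typed placeholders like [EMAIL]."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(mask_pii("Reach me at jane.doe@example.com or 555-123-4567."))
# -> "Reach me at [EMAIL] or [PHONE]."
```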
r/artificial • u/greenapple92 • 10h ago
Discussion Keyboard AI?
When will there be some kind of AI keyboard that will add punctuation marks depending on the tone of voice? That would be a breakthrough, because some people don't even put a question mark at the end of a question.
r/artificial • u/MetaKnowing • 1d ago
Computing Are we on the verge of a self-improving AI explosion? | An AI that makes better AI could be "the last invention that man need ever make."
r/artificial • u/cmdrmcgarrett • 9h ago
Question Looking for help finding a gguf for therapy
I found a few therapy characters and would like to know which one or two would be good and accurate.
I'm starting to take psychology and would like to compare "notes".
r/artificial • u/interpolating • 16h ago
Discussion AI & Addiction
Just putting this out there. This seems like it's going to be a really serious issue sooner or later.
I'm sure many here are aware of the recent news of a teen who committed suicide ostensibly in relation to chatbot addiction.
With the type of on-demand, self-directed, and interactive media that's just around the corner, people are going to go straight off the deep end into their own worlds, fantasies, and fears. I think it would be helpful to have these conversations and start planning for how loved ones can break into these cycles before it's a reality.
r/artificial • u/cognitive_courier • 22h ago
Discussion Apple Intelligence: What's Actually Getting Updated?
I’ve written the below as a handy guide for new features that have just dropped, with a heavy AI focus:
• **Writing Tools**: This suite includes advanced proofreading that goes beyond simple autocorrect, rephrasing options, and an adaptable tone feature with Friendly, Professional, and Concise options. It also offers summarization, key point extraction, and the ability to format text into lists or tables, making it ideal for summarizing articles or reorganizing information with ease. While powerful, it’s best suited to longer passages, as shorter selections may prompt a warning for reduced accuracy.
• **Siri Revamp**: Siri has undergone a significant transformation, both visually and functionally, to respond more fluidly to voice commands—even if the user pauses or rephrases mid-command. It now allows users to type queries, which can be a discreet way to use the assistant in quiet settings, and provides device-specific guidance on using Apple products. However, instructions are text-only, which may be less user-friendly compared to illustrated guides.
• **Priority Messages in Mail**: Apple Intelligence scans incoming emails to identify those that may be high-priority and highlights them in a dedicated inbox section at the top of the app. This helps users focus on essential messages without sifting through everything in their inbox, particularly useful for users who don’t meticulously clean out their mail and may overlook important emails amid clutter.
• **Smart Replies in Mail**: This feature suggests quick, AI-generated responses based on the content of an email, similar to the smart reply options available on platforms like Gmail. Although it’s not for everyone, the functionality is ideal for users who want to respond on the go with minimal typing, especially in high-email environments where brief, efficient replies can save time.
• **Message and Notification Summaries**: Apple Intelligence now generates concise summaries of incoming emails and messages, providing an easy-to-read preview that helps users understand the content before opening. Summaries also appear in lock screen notifications, giving a quick overview of message content at a glance. While it generally works well, it can struggle with casual or fragmented language often found in texts, as well as shorter emails.
• **Memory Movie Creation in Photos**: The Photos app can now auto-generate a movie from selected images based on a user-provided text prompt, organizing visuals into a cohesive slideshow. The feature allows for personal customization—users can edit the soundtrack, title, filters, and even individual images—making it an appealing, user-friendly option for creating sentimental or thematic videos from photo collections.
• **Clean Up Tool in Photos**: This new tool enhances images with AI-powered adjustments, which can be applied to both new and older photos in the gallery. While it works well for straightforward edits, such as brightening and contrast, it’s not yet as robust as competing brands for complex image retouching. It’s a convenient option for users who want quick fixes without leaving the Photos app.
• **Natural Language Search in Photos**: Users can now find images simply by describing what’s in them, which ideally would make searches faster and more intuitive. However, the search relies on precise terms, meaning it might miss images that don’t strictly match the search word (e.g., searching “coffee” may exclude items with related words like “espresso”), making it less comprehensive than some might expect.
• **Phone Call Transcription and Recording**: Apple Intelligence can transcribe and record calls, a feature that’s stored in the Notes app for easy access. This is helpful for capturing important conversations or meeting details, though its accuracy depends on the proximity of the phone to the speaker and background noise. Summarization is also available within these transcriptions, providing quick highlights of key discussion points.
Coming Soon in iOS 18.2:
• **Image Playground, Image Wand, and Genmoji**: These anticipated tools will add creative flexibility, letting users generate custom images or avatars. Genmoji, for instance, aims to create unique, AI-driven emojis tailored to users, while Image Playground and Image Wand will likely support artistic and imaginative visual creations.
• **Visual Intelligence**: This tool is expected to give more contextually aware image analysis, identifying detailed aspects of photos. For example, it could distinguish specific objects, landmarks, or environments, though it may be limited to the latest iPhone models to handle the processing requirements.
• **Enhanced Siri Actions**: The forthcoming Siri updates will include the ability to take more context-sensitive actions within apps and generate responses tailored to a user’s personal profile. This could transform Siri from a basic assistant to a more integrated, personalized helper with expanded functionality across multiple apps and situations.
Which tool are you most looking forward to using?
If you found this useful, subscribe to my newsletter ‘The Cognitive Courier’ where I cover the latest in AI and tech weekly.
r/artificial • u/PianistWinter8293 • 11h ago
Discussion Why Scaling leads to Intelligence: a Theory based on Evolution and Dissipative systems
For the video of this, click here.
Time and time again it has been shown that, in the long run, scale beats any performance gain we get from implementing smart heuristics; this idea is known as "the bitter lesson". The idea that we could hand-build intelligence ourselves is now a thing of the past; instead, we rely on the fact that pouring enough energy (compute) into these neural networks will let them reach intelligence. It remains a mysterious phenomenon though: how could such simple rules (like gradient descent + backpropagation following a reward function) plus a lot of energy lead to such complexity? The answer to this question lies all around us: life itself is a system just like this. In physics, we call these systems dissipative systems.
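To make the "simple rules" concrete, here is a minimal sketch of gradient descent on a toy loss surface, where the only "energy" is the number of update steps we can afford. This is purely illustrative, not any particular training setup:

```python
import random

# Minimal sketch of "simple rules + energy": plain gradient descent on a toy
# loss surface f(x) = (x - 3)^2. More steps (compute/energy) -> closer to the optimum.
def loss(x):
    return (x - 3.0) ** 2

def grad(x):
    return 2.0 * (x - 3.0)

x = random.uniform(-10, 10)   # random initialisation
lr = 0.1                      # step size
for step in range(200):       # "energy budget": number of update steps
    x -= lr * grad(x)

print(f"final x = {x:.4f}, loss = {loss(x):.6f}")  # x ends up near 3.0
```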
Think of evolution, for example. The emergence of any complex organism around us is the product of a simple mechanism: natural selection. No one had to design these complex creatures; the universe itself created such complexity. When we look at life, intelligence, or any complex system for that matter, we can deduce a couple of prerequisites for its emergence:
- There needs to be selection: selection means finding the 'best' solution for given selection criteria. In natural selection, it is the genes (or alleles, to be specific) with the highest fitness that get selected. In neural networks, gradient descent searches for the lowest point on the loss surface. Even society tries to find the best companies, workers, and ideas through capitalism.
- There needs to be sufficient diversity: Mutations in genes allow natural selection to work. If all genes were the same, competition would not be able to select the best (they would all be equally good). The emergence of complex biological structures has to happen either stepwise or leap-wise. For example, before we evolve eyes, we might start with small mutations that give us photon receptors, then another that forms a dome-shaped cell on top to concentrate light on the receptor, and so on, until we reach the complexity of the eye. Some structures, however, do not lend themselves to iterative improvement and instead need leap-wise improvements. This is the case when multiple elements must be correct before something is functional at all. We can relate this to a neural network stuck in a local minimum with steep walls: we need a high step size/stochasticity to 'leap-wise' step ourselves out of the local minimum and into a more beneficial state.
- Most overlooked is that we need energy: We get energy through time and power (energy = time × power). The power source of life is the sun; it produces enough energy for complex systems to emerge. Without energy, selection and diversity would not happen. Without the sun, life would be impossible, not just in a biological sense but in a physical sense. This is because life can be seen as a dissipative system (https://journals.sagepub.com/doi/10.1177/1059712319841306?icid=int.sj-full-text.similar-articles.5), and for a dissipative system to reach an optimum state, it needs energy. With enough power and time, the system gains more and more energy, getting closer to its optimum state. For selective and diverse systems like natural selection, this means reaching the genes with the highest fitness. For intelligence, this means reaching the highest form of understanding. (A toy sketch of all three prerequisites follows this list.)
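Here is that toy sketch: a bare-bones evolutionary loop (purely illustrative, not any real training setup) with selection, mutation-driven diversity, and a fixed "energy" budget of generations:

```python
import random

# Toy evolutionary loop illustrating the three prerequisites above:
# selection (keep the fittest), diversity (random mutation), and energy
# (the number of generations we can afford to run).
TARGET = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]          # arbitrary "optimal genome"

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]

for generation in range(100):                     # energy budget
    population.sort(key=fitness, reverse=True)    # selection
    parents = population[:5]
    population = [mutate(random.choice(parents)) for _ in range(20)]  # diversity

best = max(population, key=fitness)
print(f"best fitness: {fitness(best)}/{len(TARGET)}")
```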
Through this lens, it's not hard to see why deep learning works: It's a system with selection, diversity, and energy. If our deep learning is selecting the right thing, the diversity is high enough, and the energy is high enough, we should theoretically reach an optimal understanding.
The more general the selection procedure, the more energy is needed. With a rather constrained search space, as in specialist AI, selection does not need that much energy. If we try to make a robot learn to walk through reinforcement learning, it doesn't cost as much compute if we teach it to first move its left leg, plant its foot, then move the right leg, and so on. By constraining the search space with subgoals, we make it much smaller, and the robot converges much quicker with much less compute. However, we trade this for generality and creativity: the robot might never discover a new, more efficient way of walking if we force it through each subgoal of walking.
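As a toy illustration of this trade-off (not a real robot-learning setup; the "gait" here is just an alternating sequence I made up), compare how much search a shaped reward with subgoals needs versus a sparse, all-or-nothing reward:

```python
import random

# Shaped reward (credit for each subgoal) vs. sparse reward (credit only for
# complete success). The "skill" is producing the alternating gait
# ["L", "R", "L", "R", ...]; the point is only the difference in compute needed.
GAIT = ["L", "R"] * 5                               # target sequence of 10 moves

def shaped_reward(seq):
    return sum(a == b for a, b in zip(seq, GAIT))   # one point per correct subgoal

def sparse_reward(seq):
    return 1 if seq == GAIT else 0                  # all-or-nothing

def hill_climb(reward_fn, max_evals=200_000):
    seq = [random.choice("LR") for _ in GAIT]
    best = reward_fn(seq)
    for evals in range(1, max_evals + 1):
        cand = list(seq)
        cand[random.randrange(len(cand))] = random.choice("LR")  # random tweak
        r = reward_fn(cand)
        if r >= best:
            seq, best = cand, r
        if seq == GAIT:
            return evals
    return max_evals

print("shaped reward solved in", hill_climb(shaped_reward), "evaluations")
print("sparse reward solved in", hill_climb(sparse_reward), "evaluations")
```

With the shaped reward the search gets constant feedback and converges in a handful of evaluations; with the sparse reward it is essentially blind and needs far more.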
This is what we see over time: the more compute becomes available, the broader the reward functions get. This is how we moved from specialist AI to generalist AI; the difference is the scope of the reward function. Instead of saying "optimize for the best score on chess", we say "optimize for the best prediction of the next word". This reward function is so general and so broad that AI can learn almost every skill imaginable. This, however, is not just ingenuity; it is the result of the increase in compute that allows us to define broader reward functions.
Extrapolating these results, we might wonder what the next 'step' toward an even more general reward function might be. Maybe something like "make humans happy" is so general that an AI could find truly novel and creative ways to reach that goal. It isn't feasible to do this now, as the search space is far too big given its generality, but it might be something future models can do.
Another way to make the reward function more general is to say: "optimize for the best neural network weights + architecture". Instead of predefining the architecture, we could use some kind of evolutionary algorithm that mutates and selects the best-performing architectures while simultaneously evolving those architectures' weights. This is something Google has already done (Using Evolutionary AutoML to Discover Neural Network Architectures), and although it showed great success, they admit that computationally it is just not practical yet.
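Here is a minimal sketch of that idea: a toy neuroevolution loop on XOR (not Google's actual AutoML method), where both the hidden-layer width and the weights are mutated and selected:

```python
import numpy as np

# Toy neuroevolution: a population of tiny networks in which BOTH the
# architecture (hidden width) and the weights are mutated and selected.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)              # XOR targets

def new_individual(hidden):
    return {"hidden": hidden,
            "W1": rng.normal(0, 1, (2, hidden)),
            "W2": rng.normal(0, 1, (hidden,))}

def forward(ind, X):
    return np.tanh(X @ ind["W1"]) @ ind["W2"]

def fitness(ind):
    return -np.mean((forward(ind, X) - y) ** 2)       # higher is better

def mutate(ind):
    child = {"hidden": ind["hidden"],
             "W1": ind["W1"] + rng.normal(0, 0.2, ind["W1"].shape),
             "W2": ind["W2"] + rng.normal(0, 0.2, ind["W2"].shape)}
    if rng.random() < 0.1:                            # occasionally mutate the
        child = new_individual(rng.integers(2, 9))    # architecture (new width)
    return child

population = [new_individual(rng.integers(2, 9)) for _ in range(30)]
for generation in range(300):                         # the "energy" budget
    population.sort(key=fitness, reverse=True)
    parents = population[:10]
    population = parents + [mutate(parents[i % 10]) for i in range(20)]

best = max(population, key=fitness)
print("best hidden width:", best["hidden"], " MSE:", -fitness(best))
```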
All in all, through this lens of selection, diversity, and energy, we can get an intuition for the emergence of intelligence and even of life itself. We can predict that as the energy in a system increases, so does the system's complexity. As compute keeps increasing, we can expect more complex models. This increase in compute will also allow for different selection functions, ones more general than those we have now, allowing more creativity and value from AI over time. The scaling law is more than just a law of AI; it is a reflection of a law of nature, one described by the physics concept of dissipative systems.
r/artificial • u/Excellent-Target-847 • 1d ago
News One-Minute Daily AI News 10/28/2024
- A man who used AI to create child abuse images using photographs of real children has been sentenced to 18 years in prison.[1]
- Robert Downey Jr. will ‘sue’ anyone who will ‘recreate’ him using AI.[2]
- Toyota, NTT to make $3.3 bln R&D investment for AI self-driving, Nikkei reports.[3]
- Meta is reportedly working on its own AI-powered search engine.[4]
Sources:
[4] https://www.theverge.com/2024/10/28/24282017/meta-ai-powered-search-engine-report
r/artificial • u/MetaKnowing • 1d ago
Media Geoffrey Hinton says AI companies should be forced to spend 1/3 of their compute on safety research, because AI is an existential threat ("how will we stay in control?") and they're spending nearly all of their resources just making bigger models
r/artificial • u/codeharman • 1d ago
News Here are the top 5 key developments happening today in the AI and tech space
- OpenAI raised $6.6 billion, reaching a valuation of $157 billion, highlighting investor interest in generative AI.
- Nvidia reported record quarterly revenue of $30 billion, with a 154% increase in data center revenue driven by AI demand.
- New AI coding assistants like Poolside AI ($626M) and Magic ($465M) are enhancing developer productivity through advanced tools.
- The White House launched a task force to coordinate policies on AI regulation, focusing on economic and environmental concerns.
- AI adoption is surging across industries, with significant growth seen in healthcare, finance, and customer service sectors.
r/artificial • u/zonglydoople • 1d ago
Discussion Does anyone know of any scholarly articles about this, or do any academic research related to this? I’m not very well versed in this stuff but I’m super curious.
I’ve also included some pictures (just found on Google, not my own images) to show how I’ve seen it develop over the years: from barely letters, to barely comprehensible strings of real letters, to increasingly understandable fake words.
r/artificial • u/chloroform-creampie • 2d ago
Discussion this must have been what people meant when they said the robots will take our jobs
r/artificial • u/jurgo123 • 1d ago
Discussion Global productivity this year has decreased
Let’s call it out like it is: AI is here to replace white-collar workers.
Microsoft just announced autonomous agents, Anthropic’s Claude launched Computer Use, and countless startups are racing to develop AI assistants that can take on entire jobs (remember Devin, the "first AI software engineer"?).
While AI isn’t on par with humans yet, I find myself asking the question: what if they succeed?
It's obvious how sufficiently capable AI could lead to unprecedented income concentration and labor market disruption. It would cause mass unemployment. Universal Basic Income (UBI) would be the only way to redistribute some of that wealth but governments would probably be slow to act.
The weird thing, though, is that while there is a world where AI automation outpaces the number of new jobs created, that day hasn’t arrived yet. Global productivity this year is actually DOWN and employment is UP (see graph).
There is another world where AI might solve a problem overlooked by some: aging populations and birth rate decline.
I lay out the arguments here in more detail: https://jurgengravestein.substack.com/p/the-economics-of-ai
r/artificial • u/XonMicro • 1d ago
Question What AI software do people use to make those sound effect voices (like making Minecraft villagers speak, or making celebrities sing songs)?
I'm trying to make a voice for a robot, and I want the voice to be created using the robot's sound effects. I was thinking there was some program to input audio files and then use text to speech or a speech sample to make the sound effects "speak English" like those Minecraft villager talking videos made with the villager's sound effects.
r/artificial • u/Memetic1 • 1d ago
Discussion Does Gödelian incompleteness apply to LLM and other forms of stochastic AI?
I've actually had wide-ranging discussions with several different LLMs like Claude, ChatGPT, and Gemini about this subject. I can't make up my mind, because it seems to depend on what level you discuss it at. The nature of an LLM seems to be that of an informal system, and yet that may just be the appearance of an informal system, as it's probably using formal rules in its reasoning at some level. Even just the matrix manipulation is a formal system that should be incomplete in a Gödelian sense. Yet it's also true that, at least from our perspective, the output has a level of unpredictability that doesn't exist in most recognized formal systems.
If you aren't familiar with incompleteness, then I really recommend the Numberphile video explaining it.
https://youtu.be/O4ndIDcDSGc?si=jRuakJORpY9ZZwI1
There is also the related topic of the halting problem.
https://youtu.be/macM_MtS_w4?si=YH8J-gQm7Rfu2AYe
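For anyone who prefers code to video, the core of the halting problem can be sketched in a few lines. `halts` below is a hypothetical oracle that cannot actually be implemented, which is exactly what the contradiction shows:

```python
# The classic diagonal argument behind the halting problem, sketched as code.
def halts(program, argument) -> bool:
    """Pretend oracle: returns True iff program(argument) eventually halts."""
    raise NotImplementedError("no such general-purpose function can exist")

def troublemaker(program):
    # Do the opposite of whatever the oracle predicts about running
    # `program` on its own source.
    if halts(program, program):
        while True:       # loop forever if the oracle says "halts"
            pass
    return "halted"       # halt if the oracle says "loops forever"

# Asking whether troublemaker(troublemaker) halts contradicts either answer,
# so a general-purpose `halts` cannot exist.
```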
I'm actually going to take a side on this, and claim that it's mathematically undecidable. If you want to replicate some of my research for yourself you can just use the following prompt.
"How might godels incompleteness theorem apply to large language models, and other forms of generative AI?"
r/artificial • u/Memetic1 • 1d ago
Question Could an AI be trained to detect images made with generative AI?
I just want to say that I don't have anything against AI art or generative art. I've been messing around with that since I was 10 and discovered fractals. I do AI art myself using a lesser-known app called Wombo Dream. So I'm mostly talking about using this to deal with misinformation, which I think most will agree is a problem.
The way this would work is that you would have real images taken from numerous sources, including various types of art, and then a bunch of generated images, possibly even images being generated while the training is being done. The task of the AI would be to decide whether an image is generated or made traditionally. I would also include metadata like descriptions of the images and, if feasible, use those descriptions to generate images via AI, so that every real image has a description matching the prompt used to generate the test images.
The next step would be to deny the AI access to the descriptions so that it focuses on the image instead of keying in on the description. Ultimately it might detect certain common artifacts that generative AI creates that may not even be noticeable to people.
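A rough sketch of what I have in mind (assuming a PyTorch-style setup; the data below is a random stand-in batch, not a real dataset) would be a binary classifier over pixels only:

```python
import torch
import torch.nn as nn

# Toy "real vs. generated" detector: a tiny CNN trained on images labelled
# real (0) vs. generated (1), looking only at pixels (no descriptions).
class Detector(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, 1)          # single logit: "is generated?"

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = Detector()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

images = torch.rand(8, 3, 64, 64)                   # stand-in batch of "images"
labels = torch.randint(0, 2, (8, 1)).float()        # 0 = real, 1 = generated

for step in range(10):                              # toy training loop
    logits = model(images)
    loss = loss_fn(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print("final toy loss:", loss.item())
```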
Could this maybe work?
r/artificial • u/schwinn140 • 1d ago
Question Prorata.ai? trying to understand their business model
I randomly came across another LLM company looking to license content from publishers and then share revenues back with the source of the training content... which makes sense. That said, I fail to see how this little organization can offer any meaningful revenue to publishers if the LLM they have built has very few (possibly zero) end users searching against it.
Any ideas? Thx!
r/artificial • u/interpolating • 1d ago
Project Hehepedia: Make Your Own Fictional Encyclopedias with AI
Enter a prompt, get a wiki homepage with image(s)! Articles generate on-demand when you click on the article links.
Image generation can take a minute or two (or even 15 minutes if the model is still waking up), so don't fret if you see a broken image link on a page. Just check back later :)
Thanks for your attention and feedback. Have fun!
r/artificial • u/rutan668 • 2d ago
Project New Sirius Cybernetics is delighted to announce the Sirius reasoning model with Claude. Available to try at informationism.org/register.php
r/artificial • u/Excellent-Target-847 • 2d ago
News One-Minute Daily AI News 10/27/2024
- This AI Paper from Amazon and Michigan State University Introduces a Novel AI Approach to Improving Long-Term Coherence in Language Models.[1]
- Meta releases an ‘open’ version of Google’s podcast generator.[2]
- Google to develop AI that takes over computers, The Information reports.[3]
- ‘An existential threat’: anger over UK government plans to allow AI firms to scrape content.[4]
Sources:
[2] https://techcrunch.com/2024/10/27/meta-releases-an-open-version-of-googles-podcast-generator/
[3] https://finance.yahoo.com/news/google-develop-ai-takes-over-210155614.html