
Governor Newsom’s Decision on California AI Safety Bill SB 1047
Yep, regulators are coming for AI. But is this really the right bill? Should AI companies be responsible for how people use their tools? This

Elon Musk, in his typical money’s-no-object fashion, continues his power moves through the world of AI. The new training cluster being brought online by his

This is a fascinating new development, indicating that AI is beginning to overcome its limitations when it comes to making autonomous financial transactions. Some (very savvy)

AI usage is not slowing down any time soon. Especially as people realize that businesses that strategically adopt AI will replace those that don’t. I

This is the kind of AI news I absolutely love to read about. AI sometimes gets negative press—and sometimes for good reasons!—but the sheer amount

Recently, I noticed that some of my automations asking ChatGPT to summarize articles from a link were incredibly inaccurate. Worse, the summaries were incredibly believable, which is the concerning part. I started noticing names and organizations I didn’t recognize from the article, and sure enough, it had completely made things up. What’s worse is that it was totally unpredictable: sometimes it worked great. So I had to rework some existing automations to fix the issue. I’m really looking forward to a more reliable iteration of OpenAI’s flagship LLM. Maybe Project Strawberry will be the one! In the meantime, be warned…
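The rework that fixed it for me was to stop handing the model a bare link and instead fetch and strip the article text myself, so the model summarizes content it can actually see rather than guessing from a URL. Here’s a minimal sketch using only Python’s standard library; the LLM call at the end is commented out and hypothetical (swap in whatever client you use):

```python
from html.parser import HTMLParser
from urllib.request import urlopen


class _TextExtractor(HTMLParser):
    """Collect visible text, skipping <script> and <style> contents."""

    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip = 0

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip:
            self._skip -= 1

    def handle_data(self, data):
        if not self._skip and data.strip():
            self.parts.append(data.strip())


def extract_text(html: str) -> str:
    """Strip tags from raw HTML, returning the visible text."""
    parser = _TextExtractor()
    parser.feed(html)
    return " ".join(parser.parts)


def summarize(url: str) -> str:
    """Fetch the page, extract its text, and summarize that text --
    never the bare link."""
    html = urlopen(url).read().decode("utf-8", errors="replace")
    article = extract_text(html)[:8000]  # keep the prompt a manageable size
    # Hypothetical LLM call -- replace with your actual client:
    # return client.chat.completions.create(
    #     model="...",
    #     messages=[{"role": "user",
    #                "content": f"Summarize this article:\n\n{article}"}],
    # ).choices[0].message.content
    return article
```

Since the model only ever sees text you fetched yourself, it can’t silently invent an article it never loaded; if the fetch fails, you get an error instead of a confident fabrication.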

Watermarking AI content is a great idea in theory. But would it work? I imagine it could for video and imagery, maybe even audio. But what about content made with the myriad open-source tools available!? And for text? For now it’s often easy enough to tell when people aren’t using very sophisticated prompting for writing with AI, but it’s getting increasingly hard to tell. And I don’t see how passing text through a paraphrasing filter or two couldn’t spoof it… This will be an interesting space to watch!
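For text, the watermarking schemes being discussed generally work statistically rather than visibly: the generator hashes each previous token into a “green list” of words it favors, and a detector later measures how often tokens land on that list. A toy sketch of the idea (the scheme, vocabulary, and names here are purely illustrative, not any vendor’s actual watermark):

```python
import hashlib


def green_list(prev_token: str, vocab: list[str], fraction: float = 0.5) -> set[str]:
    """Deterministically pick a 'green' subset of the vocabulary, seeded by
    a hash of the previous token. The generator favors these words."""
    scored = sorted(
        vocab,
        key=lambda w: hashlib.sha256(f"{prev_token}|{w}".encode()).hexdigest(),
    )
    return set(scored[: int(len(scored) * fraction)])


def green_fraction(tokens: list[str], vocab: list[str]) -> float:
    """Detection: how often does each token fall in the green list seeded by
    its predecessor? Watermarked text scores well above the base rate (~0.5)."""
    hits = sum(
        1 for prev, tok in zip(tokens, tokens[1:]) if tok in green_list(prev, vocab)
    )
    return hits / max(len(tokens) - 1, 1)


# Toy "watermarked generation": always choose a green-listed word.
vocab = ["alpha", "bravo", "charlie", "delta", "echo", "foxtrot", "golf", "hotel"]
tokens = ["seed"]
for _ in range(8):
    tokens.append(min(green_list(tokens[-1], vocab)))
```

This also illustrates the spoofing worry above: a paraphrasing filter re-rolls the token choices, so the green-list signal washes back toward the base rate and the watermark disappears.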

Companies need to start implementing AI into their processes ASAP, lest they fall behind those that are doing so. Many companies are just blindly giving employees subscriptions to services like ChatGPT without any real training or effective use cases. Here’s an example from Amazon, one company that seems to be doing it right.

Accuracy is one of the most widely cited concerns about AI. But as OpenAI’s Sam Altman says, “the tools you’re using now are the worst ones you’re ever going to use.”
In other words, things are getting a lot better. However, it’s definitely going to take real, solid progress to win back some of the trust lost in high-profile cases where things have gone (very) awry!
Well, this may be a good step in the right direction…

The Lex Fridman podcast with the Neuralink team is the longest podcast I’ve ever listened to, but incredibly good! So amazing what their team is