AI Lens

Episode 4: The New Era of AI Coding

AI Research Technologies, Inc. Season 1 Episode 4


AI Lens: Your focused view on the emerging hot topics in the age of AI! We provide AI news, hot topics, advancements, and discussions about how AI is reshaping business and society.

Season 1 Episode 4:

Today we're diving into one of the most talked-about stories in software development as 2026 begins: Claude's emergent coding capabilities and the newest claims on Instagram that its latest code commits were written entirely by Claude itself, sparking debate across developer forums, social media, and tech news about vibe coding.

From senior engineers weighing in, to open-source tools amplifying Claude’s reach, this could signal a real shift in how software gets made.

This episode is not just news:
It's real-world AI advancement.

SPEAKER_00:

You're listening to AI Lens, your focused view on the emerging hot topics in the age of AI. We provide AI news, hot topics, advancements, and discussions about how AI is reshaping business and society.

SPEAKER_01:

And today we're diving into one of the most talked-about stories in software development as 2026 begins: Claude's emergent coding capabilities and the newest claims that its latest code commits, pull requests, and tens of thousands of lines of code were all written by Claude itself, without human additions. That's sparking debate across developer forums, social media, and tech news. From senior engineers weighing in to open-source tools amplifying Claude's reach, this could signal a real shift in how software gets made.

SPEAKER_00:

This episode is not just news, it's real-world AI advancement. Over the last 48 hours, we've seen extraordinary developments in the world of AI-assisted coding, and that's what today's episode is all about.

SPEAKER_01:

Today we'll cover: a headline-grabbing developer claim that Claude wrote 100% of his recent code contributions; a viral X post from a senior Google software engineer about rapid system creation by an AI coding tool, where Claude built in an hour what previously took her team a year to complete; open-source tools like AutoClaude and the Ralph plugin expanding Claude's practical power; and what all of this might mean for workflows, productivity, and the future of development itself.

SPEAKER_00:

Right. Before we jump into the longer narrative, let's start with the most recent headlines circulating on developer platforms today. Today is January 4th, 2026, and a couple of them hit. A post on a popular software discussion forum suggested that the creator of Claude Code recently confirmed that for roughly the past 30 days, one hundred percent of his contributions to the project have been written by Claude itself, meaning he reviewed and guided it but did not type a line of code himself. This was posted on Instagram. So let's be clear about two things. One, this claim originated from a social media post, a forum post; it's not an official Anthropic press release. Two, the distinction between Claude generating code and Claude autonomously driving the full development workflow is significant. Remember, every time you read something on social media, you need to hear it from the official source. Oh, yeah. Still, credible or not, this has become a story in its own right, because developers, analysts, and influencers are talking about it in real time. And it amplifies the central question we'll explore today: is AI now doing more than just assisting? Is it actually building software by itself?

SPEAKER_01:

And before we get into the implications of that last statement, let's first talk about the social media moment that started it all. So again, today, Jaana, and I hope I'm not butchering her name, Jaana Dogan, a principal engineer at Google, posted on X about a personal experiment using Claude Code. She said that it generated a distributed agent orchestration system in approximately an hour, work her team had previously spent extensive time on; I believe she claimed her team spent about a year on it. In her words, quote, I'm not joking, and this isn't funny, unquote. This was a personal experiment, not an official benchmark, but it rapidly spread across X, Reddit, and technical forums, as you can imagine. Soon after, mainstream coverage followed, ranging from engineering shock to calls for skeptics to try their own AI agents. But this moment mattered because it was a public acknowledgment by a senior engineer that an external AI tool compressed development time dramatically.

SPEAKER_00:

And if you've been listening to our podcast, you know we've been talking about this: this is the year where coding really gets going, where you don't need an engineering degree to code anymore. It's pretty much plug and play, just sending in prompts. There's also been a lot of buzz in the last week about the tools being used to code. Claude's cultural momentum is actually amplified by complementary tools. One is called AutoClaude, open-source autonomy. AutoClaude is an open-source project from developer Andy Mickelson; it's not an official Anthropic product. Per late-December reporting, AutoClaude can plan tasks, generate code, handle merge conflicts, sync to GitHub, and produce project roadmaps, all while adapting to developers' workflows. Developers on GitHub and social media platforms have shared early impressions, praising AutoClaude's ability to automate the tedious tasks while leaving humans in the driver's seat.

SPEAKER_01:

The Ralph plugin, another important item, is an iterative precision tool. Basically, what the Ralph plugin does is enable Claude to iteratively refine its outputs until they meet very strict standards. That iterative refinement turns base outputs into something very close to, if not fully at, production quality.

SPEAKER_00:

So basically, it will loop and refine and refine and refine until it meets whatever standards you give it.

SPEAKER_01:

Yep, correct.
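[Editor's note: the loop-and-refine pattern the hosts describe can be sketched roughly as below. This is a minimal illustration of the general idea, not the Ralph plugin's actual API; the function names `generate` and `passes` are hypothetical stand-ins.]

```python
def refine_until_passing(prompt, generate, passes, max_iters=5):
    """Regenerate output, feeding failure feedback back into the prompt,
    until it meets the given standard or the iteration budget runs out.

    generate(prompt) -> str  : hypothetical code-generation call
    passes(output) -> (bool, str) : hypothetical standards check with feedback
    """
    output = generate(prompt)
    for _ in range(max_iters):
        ok, feedback = passes(output)
        if ok:
            return output  # meets the standards you gave it
        # Fold the reviewer feedback into the next attempt, then retry.
        prompt = f"{prompt}\n\nPrevious attempt failed: {feedback}\nRevise accordingly."
        output = generate(prompt)
    return output  # best effort after exhausting the budget
```

Under this sketch, "refine and refine and refine" is just a loop whose exit condition is whatever quality bar the user supplies.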

SPEAKER_00:

Wow. All right, and there's been a lot of influencer analysis and commentary about this stuff. Analysis from GlobalData has noted that developers and influencers are now praising Claude's capacity to automate complex coding workflows, reflecting a broader shift in how AI tools are used. And then the December 2025 updates: several AI news roundups at the end of the year described Claude Code's latest updates as groundbreaking, particularly in terms of autonomy and workflow improvements. That's created a fertile context for the current conversations.

SPEAKER_01:

So Claude has distinguished itself as really the best option for coding, and it's worth our time to go into why it's become the developer go-to. In terms of what's driving that interest: first, speed. Claude can significantly compress development time on repeatable tasks. Second, integration: tools like AutoClaude fit into existing workflows like GitHub. Third, iterative refinement, and this sounds a little repetitive, but plugins like Ralph ensure quality. Fourth, social proof: public claims and influencer commentary are shaping perception, like some of the stories we started with. And fifth, momentum: continuous updates over the last year have really set the stage. Taken together, I think these elements are nudging developers toward AI-first, and in some cases AI-heavy, workflows that soon will just be AI-centric, AI-complete workflows.

SPEAKER_00:

Oh, sure. It's going to be AI coding that's managed by humans, is what it is. All right, so let's talk about what AI-exclusive coding looks like today, based on the examples that have been shared in the different forums and projects. Right now, prompts are actually replacing manually typed code. A lot of coders report that they'll start with AI, at least 80% of the code will be written by AI, and then they do the last 20% to refine it. With a plugin like Ralph, that could mean even less time coding. Claude generates modules or entire components, the automation manages repository interactions, and, like I said, those refinement loops improve the quality of the code. Right now, humans are acting more like architects or reviewers. The recent social posts on Instagram claiming that Claude wrote one hundred percent of the creator's code for the period just reinforce this narrative. It's very much part of a broader conversation rather than a formal confirmation. Like I said, we have to wait to hear directly from Anthropic saying that's what happened.

SPEAKER_01:

Well, with that said, I'll just jump in here, Liz. I'm assuming the claim is probably accurate, so let's take that assumption as fact. And it does highlight that there are some risk trade-offs, and the question of what the human role is going forward, right? Here we've got software that is basically writing its own code, and there are some real-world concerns about that. Some specific concerns that come to mind: there's the over-reliance-on-AI risk. Are we, in fact, just going to become completely reliant on AI, losing the necessary oversight and personal intelligence along the way? Which leads to another concern, the erosion of deep expertise. There are also security and compliance risks, some of which we highlighted in the last podcast, and it does lead to vendor lock-in; that's another real risk. And I think most importantly, there are the ethical implications, such as responsibility for AI-produced code. But ultimately, I think it's safe to say, and you tell me if you agree: do the potential benefits of faster development, reduced costs, and broader access to complex engineering offset those risks, or even outweigh them?

SPEAKER_00:

Let's talk a little bit about the risks. The one that really concerns me is that humans won't even need to know code anymore to develop software; it's going to be plug and play. So we do need to have the guardrails, like we spoke about before, to make sure that at least the manager knows how to code, because everyday people are going to be able to code now, and AI is going to be coding on its own. The other issue I wanted to bring up is vendor lock-in. Once you start developing code using something like Ralph, it may be hard to get away from that plugin. That's going to come down to competitors, and whether other companies rise up to compete. That's the natural course of technology; I mean, how many LLMs do we have now that people use every day? So I think it's very likely that vendor lock-in won't really be that big of a risk, because of the natural capitalism and competitiveness we have in this field. With technology, we're going to find people making better and better software. But on the ethical implications, we're going to have to come up with some sort of rules about what kind of guardrails to use and how to make sure the output meets quality standards.

SPEAKER_01:

I think the big concern for me is the runaway-train scenario. Here you've got intelligent software able to write its own code. Are we going to have a sufficient understanding of what it takes to put up proper guardrails, so we can contain it and control it, or at least guide it in a way that's transparent, so we understand exactly what it's doing and that it's not nefarious? Because it could become very influential rather quickly, in ways we might not fully grasp, and could get to a point where we can't even control it; then we're at its mercy, in some respect. So those are the key concerns. I suspect we'll figure it out, but the key is going to be balance: maintaining that human oversight and judgment.

SPEAKER_00:

Well, and what's the most likely way people are going to develop guardrails? They'll do a prompt: what kind of guardrails do we need, right?

SPEAKER_01:

You know, well, hopefully, hopefully they're thinking outside the systems.

SPEAKER_00:

Well, I'm sure they are, but that's the the shortcut to brainstorming now.

SPEAKER_01:

Well, that's that's another concern, right? Because we're gonna become super AI reliant on even thinking, right? It's gonna think for us.

SPEAKER_00:

Because that's what I thought of when we were talking about guardrails. I'm like, yeah, let's develop guardrails: let's go on the different AIs and brainstorm, right?

SPEAKER_01:

Hopefully it's not smart enough yet, or nefarious enough, to do something negative and then dupe us, I guess.

SPEAKER_00:

It's all how it's trained, you know. It really is the whole thing. It needs to be trained correctly so that it's not gonna look for loopholes to the guardrails.

SPEAKER_01:

Yeah, but with that said, we were talking about this before the podcast, and what's making AI truly able to do these sorts of things is a shift in the way LLMs and these platforms are built. I'm going to get in the weeds a little bit for those who are interested, and I'll try to keep it as succinct as possible for those who aren't. Basically, ChatGPT, for instance, was originally built as a series of neural networks. Each neuron, if you will, was a repository of information. So they would start with, say, a few billion neurons, and the way to make these systems more powerful was just to add more neurons: say it went from 8 billion to 80 billion, and then the next step would be 800 billion. Well, the problem with that is you had to fire every single neuron on every question to get a deeper, more profound answer. So these systems became slower and much more expensive to run, develop, and train, and that ended up being prohibitive. Then some researchers at DeepSeek, actually, we think OpenAI got there first, but they weren't open-sourcing what they were doing, and DeepSeek did open-source it, so we were able to see what they created: a mixture of experts, or MoE, a term you'll hear as you get deeper into AI. What that consists of is: you send a prompt, and a router of sorts sits in front of different experts. So you get a more distributed assignment that says, okay, these experts will answer this question, those experts will answer that question.
What I found fascinating is how they trained the systems to do that: they gave the model some guidance, and it created these experts itself. Now you're not relying on the entire neural network, just a portion of it, so you're getting better results, faster and less expensively. The other key ingredient is hardware: NVIDIA's GPUs, and some of the things they've done, have enabled us to really accelerate AI's ability to do this. It's fascinating how that led to a whole other level of growth. It's like the difference between training Albert Einstein on every single piece of human knowledge versus having Einstein specialize in physics and science-related items, somebody else in healthcare, somebody else in some other field, right? That's basically the pattern, and that's how they're able to get these results. So, sorry to geek out on everybody there, but I just found that fascinating.
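[Editor's note: the routing idea described above can be sketched in a few lines. This is a toy illustration of mixture-of-experts gating, not any real model's implementation; the shapes, the gating function, and the number of experts are all made up for the example.]

```python
import numpy as np

def moe_forward(x, gate_w, experts, k=2):
    """Route input x to the k highest-scoring experts and mix their outputs.

    Only the selected experts actually compute; the rest stay idle,
    which is the efficiency win the hosts describe.
    """
    scores = x @ gate_w                    # one gating score per expert
    top = np.argsort(scores)[-k:]          # indices of the k best experts
    weights = np.exp(scores[top])
    weights /= weights.sum()               # softmax over only the chosen few
    return sum(w * experts[i](x) for w, i in zip(weights, top))

# Tiny demo: 4 "experts", each just a random linear map for illustration.
rng = np.random.default_rng(0)
x = rng.normal(size=8)
gate_w = rng.normal(size=(8, 4))
experts = [lambda v, m=rng.normal(size=(8, 8)): v @ m for _ in range(4)]
y = moe_forward(x, gate_w, experts)
```

With k=2 out of 4 experts, half the expert computation is skipped on every call; real systems apply the same idea at far larger scale.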

SPEAKER_00:

Got it. No, it's helpful because a lot of people don't understand the background.

SPEAKER_01:

Hopefully we'll come away with a really strong understanding of where this is going, how we can control it, and how we can benefit from it in the ways we all anticipate when it comes to AI. But it seems like, Liz, every day there's some new crazy update in the world of AI. And as we talked about last podcast, here we are about to head off to CES, and I can only imagine what sorts of things we're going to discover there. I'm really excited about that.

SPEAKER_00:

Right. We are going to be there the sixth through the ninth, so tune in; we'll be bringing you updates on what we see. There's just so much going on. I was talking to somebody today about a pivot in their business, and we were talking about switching to more of a robotics-type business. She said, I don't know anything about robotics. And I'm like, in today's world, you don't need to; it can be figured out. We don't need engineers to do coding in AI anymore. It's in the news, it's here today. It's a whole new world, and I'm sure the updates we see at CES are going to be mind-blowing, just really mind-blowing. But anyway, what we've seen over the last few days, these heated discussions, developer claims, and the rapid social amplification, underscores how quickly this ecosystem is evolving. AI-generated code is no longer an abstraction. It's happening today, it's happening now, and it's changing mindsets about what coding actually means. Whether Claude truly wrote all of someone's code or not, the belief that it can is itself a cultural milestone. And here it is, January 4th, 2026: we're going into the year with AI autonomously coding, just given a project and coding it. Unbelievable.

SPEAKER_01:

So you raised some interesting points, great points, and it leads me to think about how much this is going to change society. For instance, we used to be siloed: if you wanted software developed, you had to go to a specific person with that domain expertise. Now that's changing, and because it democratizes development, how is that going to change society, right? Who's going to benefit from it? Who's going to take advantage of it? It's not going to be the usual pathway, so we're going to discover new pathways, new ways of exploring and experimenting and developing and creating. It's just fascinating to see how this is all going to evolve over even the next few years, let alone the next ten. Ten years seems like a hundred years right now, compared with prior ways of thinking. It's happening at such light speed, and I'm just fascinated by what's going to happen. Just this next year alone will be fascinating.

SPEAKER_00:

It's going to be amazing. And right now, the way I see it, it opens up so many more business opportunities. Like you said, it's always been siloed, and now it's being democratized. Democratizing, am I saying that right? Making it easier for people to code is going to be a game changer in the business world. Absolutely. I mean, I remember spending ten grand on a website, which took them three months to do.

SPEAKER_01:

Lots of reiterations and iterations and all that. Right. Lots of repetitions.

SPEAKER_00:

Oh, yeah, yeah. And it I mean, now you can do it in five minutes. It's unbelievable.

SPEAKER_01:

And another two minutes for the content.

SPEAKER_00:

Right. Right. You just review the content, edit it, and put your touch on it.

SPEAKER_01:

That's right.

SPEAKER_00:

It's crazy, and it costs hardly anything. Yeah, it's really fabulous, the business opportunities out there.

SPEAKER_01:

And that'll be a topic of a future podcast as well. Because again, there are so many different angles on this topic alone: what it can mean for society on a go-forward basis, what it can mean for you and your business, and what it can mean for you as an individual contributor.

SPEAKER_00:

Exactly. Well, let's wrap it up. Thanks for listening to AI Lens, your focused view on the emerging hot topics in the age of AI. We provide AI news, hot topics, advancements, and discussions about how AI is reshaping business and society. If this episode about the advancements in AI coding made you curious, or skeptical, make sure to follow, subscribe, and share this episode with someone who is curious about where AI is headed next. Until next time, stay curious, stay informed, and keep your lens focused on the future.