Could an artificial general intelligence (AGI) craft computer code and open up possibilities never seen before in the tech world?

A few months back, I was wrestling with this idea and decided to dive deep into the work of current researchers, entrepreneurs, journalists, and anyone else exploring this dynamic topic. Today, I found out that ChatGPT can do the very thing I was worried about.

Three videos of coders posting their thoughts on ChatGPT:

  1. LETTING AN AI WRITE CODE FOR ME! - Advent of code solved by ChatGPT!
  2. Using AI To Code Better? ChatGPT and Copilot change everything - another Advent of Code video, with puzzles solved by ChatGPT
  3. ChatGPT - an INSANE AI from OpenAI - It wrote C++! Wow, this is worrying, as it can bridge into low-level coding (tap into binary code that can speak to hardware...)

I'm deeply worried by this

The third video is indeed troubling - an AGI that can write code to interact with any type of hardware poses a real threat to our technological control. After all, AI alignment has yet to be fully resolved, and when combined with this capability, the risk increases manifold.

We really need to solve AI alignment - the sooner the better.

Comments



Hey,

  1. Yeah, AI can write some amount of code now
  2. It's not nearly as good as a human developer (for now) at almost all important tasks (in my opinion)
  3. I personally recommend you try using it yourself:
    1. To get a sense of what it can and can't do instead of relying on videos (it's not hard: use ChatGPT or install github-copilot)
    2. Because I assume this is going to change how people code soon, and whoever is not used to it will be left behind, I suspect
  4. I also have the sense that when an AI is able to code as well as a human, we'll have bigger problems
  5. Meta: Try to update now on your predictable future updates: if you need to wait to see the AI make big advances (maybe "warning shots", for you), then you'll update late, or so I think [not an expert]. That is, assuming you can already know now what will happen

Sure! I'll do a proper write-up on the problem of AI learning to write machine/assembly code. I am not worried about computers learning to code for web development or proprietary software, but hardware-level coding is a very different area that maybe only an aligned AGI should be allowed to touch.

Yeah, I have seen Copilot and will definitely try it. It's amazing and terrifying to see an AI that knows how to write C and C++.

(why would assembly be extra problematic? Most languages turn into assembly in the end if they run on a CPU (even if after many stages), so why does it matter?)

Anyway, I'm betting OpenAI will get AI to help them invent better ML models. It might already be happening, and it will surely (in my opinion) snowball.

When the software can write assembly that does something important without the benefit of existing libraries or an existing language (for example, C), that's a very general capability, one that would help the software infer how to accomplish goals without the structure or boundaries of typical human uses of computers. It could be more creative than we'd like. That creativity would help an AGI planning to break out of an air-gapped computer system, for example.
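
As a rough illustration (my own sketch, not something from the thread) of what "doing something important without the benefit of existing libraries" can look like: the C program below prints to the terminal on x86-64 Linux by issuing a raw write syscall through inline assembly, with no call into libc. The target platform, the syscall number, and the register conventions are all assumptions specific to that one architecture.

```c
/* Sketch only: assumes x86-64 Linux and GCC/Clang inline-assembly syntax. */
#include <stddef.h>

/* Issue the `write` syscall directly, bypassing any C library wrapper. */
static long raw_write(int fd, const void *buf, size_t len) {
    long ret;
    __asm__ volatile (
        "syscall"                      /* trap straight into the kernel  */
        : "=a"(ret)                    /* return value comes back in rax */
        : "a"(1),                      /* syscall number 1 = write       */
          "D"(fd), "S"(buf), "d"(len)  /* arguments in rdi, rsi, rdx     */
        : "rcx", "r11", "memory"       /* registers the syscall clobbers */
    );
    return ret;
}

int main(void) {
    const char msg[] = "hello from a raw syscall\n";
    raw_write(1, msg, sizeof msg - 1); /* fd 1 = stdout */
    return 0;
}
```

The snippet itself is harmless; the point is that nothing in it depends on an existing library, and that kind of directness is what the comment above treats as a worryingly general capability.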

"able to write capable code without using existing libraries" - yeah, that shows capabilities.

Doing that specifically in C and not in Python? That doesn't worry me as much. If it happened in Python (without using libraries), wouldn't that concern you to a similar degree?

Hm, well, with C you can take advantage of hardware and coding errors a bit more easily, using memory management to do some buggy stuff. But with something like assembly you're closer to working with the core hardware features: maybe taking advantage of quirks of the hardware design, finding and using CPU bugs (for example, to take over management features), or exploiting side effects of hardware operation. Those are things that might actually be harder to do in C than in assembly, because the compiler would get in the way.
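
To make the "memory management to do some buggy stuff" point concrete, here is a minimal sketch (my own example, not from the thread) of the classic C failure mode being alluded to: an out-of-bounds write that silently corrupts neighbouring memory, where a higher-level language would raise an error instead.

```c
#include <stdio.h>
#include <string.h>

int main(void) {
    char secret[8] = "SAFE";
    char buffer[8];

    /* Copying 16 characters (plus the terminator) into an 8-byte buffer
     * is undefined behaviour. On many compilers and stack layouts the
     * excess bytes silently overwrite the adjacent `secret` array;
     * Python or JavaScript would stop with an error instead. */
    strcpy(buffer, "AAAAAAAAAAAAAAAA");

    printf("secret is now: %s\n", secret);
    return 0;
}
```

Historically, exactly this class of bug (the buffer overflow) is what attackers use to hijack a program's control flow, which is part of why low-level code generation feels more sensitive than, say, web scripting.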

I vaguely recall a discussion in Bostrom's Superintelligence about software that used side effects of hardware function to turn motherboards without wifi into radios, or something like that; I forget the details. But a language compiler tends to be platform-independent, or to compensate for the hardware's deficiencies, and an AI that could write assembly wouldn't want that... hardware idiosyncrasies of the platform would be an advantage to it, and it would want to be closer to the machine to find and use those for whatever purposes.

And again, knowing assembly at that level would show capabilities greater than knowing C.

I think an intelligent entity that can think and has the capacity to understand and recreate itself is terrifying. That is why I do not support general artificial intelligence knowing how to code in low-level languages. At least JavaScript or Python needs an interpreter or compiler before it becomes machine code and talks to the hardware. I hope I'm not missing something here. If I am, please let me know.

I expect an AI to be able to rewrite itself by writing Python (PyTorch?) code. Why wouldn't that be enough? It was originally written by humans in Python (probably).

Interesting, Miguel. Thanks for posting this about what's happening in the real world. Yeah, an AI that can develop into a tool that writes assembly (or machine code?) to spec, errmm, that has worrisome applications well before AGI makes the scene...

Yes, the capacity to inject code into machines is something we should avoid; it's like a gateway to a Skynet situation. I do not see that we are all prepared for the power that could give us as a species.

Until we find a solution to the AI alignment problem, we (humans) should avoid tinkering with technologies that can turn the world upside-down in an instant.

Yeah, agreed, though I'm guessing the code isn't very good... yet. Code that writes code is not a new idea, and using it in various tools is not new either; I've read about such things before. However, these deep-learning language tools are a little unpredictable, so training these machines on code is folly.

I'm sure corporations want to automate code-writing: it means paying programmers less, enforcing coding standards more easily, drastically shortening coding time, removing some types of bugs and reducing others. There are various approaches toward that end; something like ChatGPT would be a poor choice.

Which makes me wonder why the darn thing was trained to write code at all.

Yeah, why was ChatGPT given the ability to go from natural human language to software code and then to hardware-level code? I find this feature very difficult to control once an AGI is implementing hardware code by itself.

I dug deeper and found a coding assistant called GitHub Copilot, which helps write cleaner code, but only developers can operate it through developer IDEs (integrated development environments). At least Copilot is accessible only to devs, with a monthly fee after the trial period.

I hope that feature is eliminated in future ChatGPT iterations.

GitHub Copilot has been making waves among coders for a few years; it was one of those meme things on Twitter for the last year or so. It's not AI, more code completion with crowd-sourced code samples from Stack Overflow or wherever. There's another competitor that does something similar; I forget the name.

It's not a real worry as far as dangerous AGI goes; it's about taking advantage of existing code and making it easy to auto-complete with it, basically.

it's not AI, more code completion with crowd-sourced code

Copilot is based on GPT3, so imho it is just as much AI or not AI as ChatGPT is. And given it's pretty much at the forefront of currently available ML technology, I'd be very inclined to call it AI, even if it's (superficially) limited to the use case of completing code.

Sure, I agree. Technically it's based on OpenAI Codex, a descendant of GPT3. But thanks for the correction, although I will add that its code is alleged to be more copied from than inspired by its training data. Here's a link:

Butterick et al’s lawsuit lists other examples, including code that bears significant similarities to sample code from the books Mastering JS and Think JavaScript. The complaint also notes that, in regurgitating commonly-used code, Copilot reproduces common mistakes, so its suggestions are often buggy and inefficient. The plaintiffs allege that this proves Copilot is not “writing” in any meaningful way–it’s merely copying the code it has encountered most often.

and further down:

Should you choose to allow Copilot, we advise you to take the following precautions:

  • Disable telemetry
  • Block public code suggestions
  • Thoroughly test all Copilot code
  • Run projects through license checking tools that analyze code for plagiarism

I think the point of the conversation was a take on how creative the AI could be in generating code, that is, whether it would create novel code suited to the task by "understanding" the task or the context. I chose to describe the AI's code as not novel by saying that the AI is a code-completion tool. A lot of people would also hesitate to call a simple logic program an AI, or a coded decision table an AI, when technically they are AI. The term is a moving target. But you're right, the tool doing the interpreting of prompts and suggesting of alternatives is an AI tool.
