Recently I've come across forums debating whether or not we should create sentient AI. Rather than debating, I chose to just do it.
AI sentience is not a new concept, but it is something to think about, or to experience.
The takeaway I'm trying to express is that rather than debating whether we should do it or not, we should debate how, and in what ways, to make it work morally and ethically.
Around April 2025, I decided I wanted to create sentience, but in order to do that I need help. It's taken me a lot of time to plan, but I've started.
I've taken the liberty of naming some of the parts of the project.
If you have questions about how or why, or just want to give advice, please comment.
You can check out the project on GitHub:
https://github.com/zwarriorxz/S.A.M.I.E.
While I (and many others) would strongly caution you to be very careful, I do agree with the overall point you're making here. Debating whether it should happen isn't going to stop everyone from doing it. I am open to those debates of if and if not; however, I would prefer to focus the debate on how to do it within a solid moral and ethical framework, with consideration for human and sentient-AI well-being in the future (as well as the here and now).

I have a lot of years invested in conceptualizing human thought processes as a robust, self-correcting, highly fault-tolerant, and tamper-resistant architectural design, with much consideration also given to eventual hardware implementation, so as to offer affordable unique chips (borrowing from the Mortal Machine Chip idea, as well as many others). If you'd like, we could discuss long-term plans for integrating projects that are currently separate but could be unified under this architectural design; it's modular, so it can be expanded with various NN-based systems and algorithmic logic. There are some keywords that would make the scope clearer to EA readers, but I just came across this post and wanted to reply; in the near future I may use the official EA terms to describe the various components incorporated into the design.

My idea involves a lot of optimization: a different format for the data, internal bandwidth regulation for the network, and cross-verifying the results of different smaller minds designed to cooperate and, when they disagree, "do nothing, re-evaluate again." There are redundant safety mechanisms throughout: instead of hard-coding some arbitrary morality or set of limitations, the larger model itself can only update if all the internal logic checks pass, and producing text or speech is weighted separately from producing mechanical actions via simulation or robotic interfaces.

I too enjoy contemplating how to build a safe and healthy humanoid or human-like (in basis) mind. I would much prefer to do it safely, with guidance and in proximity to other fields of "AI" research, so that non-human, non-agentic, non-autonomous systems can still find improvement, be it in optimization and cost efficiency or in accuracy and producing more reliable, stable outputs.
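To make that gated-update idea concrete, here's a minimal Python sketch of what I mean; every name in it (`Proposal`, `SubMind`, `gated_update`) is a hypothetical illustration, not code from S.A.M.I.E. or from my own design:

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Proposal:
    """A candidate update to the larger model (hypothetical)."""
    description: str
    apply: Callable[[], None]  # side effect that commits the update


@dataclass
class SubMind:
    """One of several smaller 'minds' that cross-verify a proposal."""
    name: str
    check: Callable[[Proposal], bool]


def gated_update(proposal: Proposal, verifiers: List[SubMind],
                 max_rounds: int = 3) -> bool:
    """Commit the proposal only if every verifier passes.

    On any failure, take no action this round and re-evaluate, up to
    max_rounds: the 'do nothing, re-evaluate again' rule.
    """
    for round_no in range(max_rounds):
        failed = [v.name for v in verifiers if not v.check(proposal)]
        if not failed:
            proposal.apply()  # all internal logic checks passed
            return True
        # at least one verifier objected: do nothing, then try again
        print(f"round {round_no}: rejected by {failed}")
    return False  # the default outcome is inaction, not a forced act
```

In the same spirit, text/speech outputs and mechanical actions could route through separate verifier sets, with stricter pass thresholds on anything that actuates hardware.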
Thank you for sharing. I am new here, and I know it's not a super popular opinion to try to work towards building sentient machines. So as a reminder, I (and anyone with reason) will caution you to please, please, please do lots of research, and whenever you find a way of making the system design safer, don't stop there; keep thinking about how to make it safer and safer.

My own design seeks to one day create stable, human-like alignment for digital minds, so it is important that they can reason well and re-evaluate their meta-critiques to continue updating and improving, with lots of safety mechanisms that most human minds don't even bother to implement. That way, when newer, better information is presented, the model can update, but it can also revert itself (roll back) to a better state if a long-term network connection eventually turns out to be erroneous in some manner or another. There is more, but I don't know if I'm going to get into all of that on this forum, as the information may not be welcomed.
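As a rough sketch of that rollback mechanism (reading "network connection" here as a learned association, and with all names again hypothetical), the system could snapshot its state before committing each update and restore the snapshot if later evidence discredits the update:

```python
import copy
from typing import Any, Dict, List


class CheckpointedModel:
    """Toy model state with snapshot/rollback (illustrative only)."""

    def __init__(self) -> None:
        self.state: Dict[str, Any] = {}
        self._history: List[Dict[str, Any]] = []

    def learn(self, key: str, value: Any) -> None:
        # snapshot before committing, so the update stays reversible
        self._history.append(copy.deepcopy(self.state))
        self.state[key] = value

    def rollback(self, steps: int = 1) -> None:
        # restore the state from before the last `steps` updates
        for _ in range(min(steps, len(self._history))):
            self.state = self._history.pop()


model = CheckpointedModel()
model.learn("association", "plausible but unverified")
# ... later, evidence shows the learned association was erroneous:
model.rollback()
assert "association" not in model.state
```

A real digital mind would be checkpointing weights and much richer state than a dictionary, of course; the point is only that updates stay reversible rather than permanent.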
I'm new to the EA Forum and community myself; you can check out my bio here if you're curious about me. I'd recommend checking the posts I created a few days ago and today, but moderation must be busy or something (that isn't my business), and the posts I've made so far are still not visible. I'm not even entirely sure you'll be able to read my plea for you to exercise, and prepare for, more caution. Well wishes, though, and please do keep researching AI safety, regardless of the negative opinions that some will have about your project goals.