I'd summarize your views as: "it's not possible to solve this problem because nobody has solved it so far and everyone else says it can't be solved".
I believe the correct ethics are to pursue our individual and collective goals without creating any problems for others.
I wrote a book about the meaning of life, you can find it here: https://www.amazon.com/dp/B08512QKY9
You could start with the book *Singularity Hypotheses*. After each paper there is a short response from another author. A few of the papers deal with the control problem, and a few philosophers contributed, but I don't think a philosopher ever responds to another philosopher; usually one of the two is a scientist. It will at least give you a few good ideas. Most of the chapters are available online if you search for them by title.
You can also try this talk by Tim Mulgan in which he critiques Bostrom (I haven't gone through it closely, though).
Description from the host:
> Brian and I discuss a range of topics related to his latest book, *The Alignment Problem: Machine Learning and Human Values*. The alignment problem asks how we can build AI that does what we want it to do, as opposed to building AI that will compromise our own values by accomplishing tasks that may be harmful or dangerous to us. Using some of the stories Brian relates in the book, we talk about:
Timestamps:
4:22 – Increased work on AI ethics
8:59 – The Alignment Problem overview
12:36 – Stories as important for intelligence
16:50 – What is the alignment problem?
17:37 – Who works on the alignment problem?
25:22 – AI ethics degree?
29:03 – Human values
31:33 – AI alignment and evolution
37:10 – Knowing our own values?
46:27 – What have we learned about ourselves?
58:51 – Interestingness
1:00:53 – Inverse RL for value alignment
1:04:50 – Current progress
1:10:08 – Developmental psychology
1:17:36 – Models as the danger
1:25:08 – How worried are the experts?