*** Need the answer ASAP, within 1 hour ***
*** Please bid only if you're able to submit the answer within 1 hour ***
1.)
If an AI were to become more intelligent than is possible today, we can suppose that it could develop moral reasoning and learn how humans make decisions about ethical problems. But would this suffice for full moral agency, that is, for human-like moral agency?
Note: Your answer should be at least 400-500 words and include at least 2-3 references. No plagiarism or AI-generated content is allowed.
2.)
How do we know whether an AI has morally relevant properties or not? Are we sure about this in the case of humans?
Note: Your answer should be at least 400-500 words and include at least 2-3 references. No plagiarism or AI-generated content is allowed.
Note:
Content should be clear, concise, and understandable.
No plagiarism or AI-generated content is allowed.
Please follow the minimum number of pages and include the minimum number of references.
Check for any grammatical errors and issues with sentence structure.