KartavyaDesk

We’ll probably look back on 2022 as the year generative Artificial Intelligence (AI) exploded into public attention, as image-generating systems from OpenAI and Stability AI were released, prompting a flood of fantastical images on social media. Last week, researchers at Meta announced an AI system that can negotiate with humans and generate dialogue in a strategy game called Diplomacy. Venture capital investment in the field grew to $1.3 billion this year, according to Pitchbook, even as it contracted for other areas in tech.

Kartavya Desk Staff

Topic: Case Study



The digital artist Beeple was shocked in August when several Twitter users generated their own versions of one of his paintings with AI-powered tools. Similar software can create music and videos. The broad term for all of this is ‘generative AI’, and as we lurch toward this digital future, familiar tech-industry challenges like copyright and social harm are re-emerging.

Last year, Meta unveiled Galactica, a language system specializing in science that could write research papers and Wikipedia articles. Within three days, Meta shut it down. Early testers found it was generating nonsense that sounded dangerously realistic, including instructions on how to make napalm in a bathtub and Wikipedia entries on the benefits of being Caucasian or on bears living in space. The eerie effect was fact mixed so finely with hogwash that it was hard to tell the two apart. Political and health-related misinformation is hard enough to track when it is written by humans.

Now, we have deepfake videos of politicians, actors and prominent personalities which can cause irreparable harm to our society and democratic institutions.

What are the ethical issues in the above case? How can we have ‘ethical AI’? Suggest measures that must be taken to prevent the moral damage that can arise from AI.



AI-assisted content, editorially reviewed by Kartavya Desk Staff.

