
The Download: How China’s universities approach AI, and the pitfalls of welfare algorithms


Just two years ago, students in China were told to avoid using AI for their assignments. At the time, to get around a national block on ChatGPT, they had to buy a mirror-site version from a secondhand marketplace. Its use was common, but it was at best tolerated and more often frowned upon. Now professors no longer warn students against using AI. Instead, students are encouraged to use it, as long as they follow best practices.

Just like their counterparts in the West, Chinese universities are going through a quiet revolution: the use of generative AI on campus has become nearly universal. But there is a crucial difference. While many educators in the West see AI as a threat to be managed, Chinese classrooms increasingly treat it as a skill to be mastered. Read the full story.

—Caiwei Chen

If you’re interested in reading more about how AI is affecting education, check out:

+ Here’s how ed-tech companies are pitching AI to teachers.

+ AI giants like OpenAI and Anthropic say their technologies can help students learn—not just cheat. But real-world use suggests otherwise. Read the full story.

+ The narrative around cheating students doesn’t tell the whole story. Meet the teachers who think generative AI could actually make learning better. Read the full story.

+ This AI system makes human tutors better at teaching children math. Called Tutor CoPilot, it demonstrates how AI could enhance, rather than replace, educators’ work. Read the full story.

Why it’s so hard to make welfare AI fair

There are plenty of stories about AI causing harm when deployed in sensitive situations, and in many of those cases the systems were developed without much concern for what it means to be fair or how fairness should be implemented.

The city of Amsterdam, by contrast, spent a great deal of time and money trying to create ethical AI. In fact, it followed every recommendation in the responsible AI playbook. Yet when it deployed its system in the real world, it still couldn't remove bias. So why did Amsterdam fail? And, more importantly, can this ever be done right?

Join our editor Amanda Silverman, investigative reporter Eileen Guo, and Gabriel Geiger, an investigative reporter at Lighthouse Reports, for a subscriber-only Roundtables conversation at 1 pm ET on Wednesday, July 30, to explore whether algorithms can ever be fair. Register here!

