
OpenAI's ChatGPT Mac App Stored Conversations in Plain Text


Hello AI Lovers!
Today’s Topics Are:

- OpenAI's ChatGPT Mac App Stored Conversations in Plain Text
- Brazil Blocks Meta from Using Social Media Posts to Train AI Models

OpenAI's ChatGPT Mac App Stored Conversations in Plain Text

OpenAI's recently launched ChatGPT macOS app had a significant security flaw: until Friday, it stored user conversations in plain text, making them easily accessible to anyone with access to the machine. This issue was brought to light by Pedro José Pereira Vieito, who demonstrated how straightforward it was to access these files.

Security Flaw Details

Pereira Vieito discovered that by accessing certain files on the computer, he could read the text of ChatGPT conversations. He even created an app to showcase this vulnerability, allowing users to view their conversations with just a click. The problem lay in the fact that the app's stored data was unencrypted, making it susceptible to unauthorized access by malicious actors or apps.
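The core of the finding is simple: a file written without encryption can be read by any other process running as the same user, with ordinary file I/O and no exploit. The sketch below illustrates the idea with a hypothetical chat app writing JSON to a stand-in data directory; the actual file names and location used by the ChatGPT app are not reproduced here.

```python
import json
import pathlib
import tempfile

# Hypothetical example: a chat app saves conversations as plain JSON.
# The temp directory stands in for an app-data folder such as one under
# ~/Library/Application Support (real paths are not reproduced here).
app_support = pathlib.Path(tempfile.mkdtemp())

# The app writes a conversation to disk without encryption.
conversation = {"role": "user", "content": "my private question"}
(app_support / "conversation-1.json").write_text(json.dumps(conversation))

# A second, unrelated program running as the same user can now recover
# the full text with nothing more than ordinary file reads.
for path in app_support.glob("conversation-*.json"):
    leaked = json.loads(path.read_text())
    print(path.name, "->", leaked["content"])
```

This is why encryption at rest matters: without it, "access to the machine" is the only barrier between a third-party app and the stored conversations.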

OpenAI’s Response

Upon being contacted by The Verge about the issue, OpenAI promptly released an update to encrypt these conversations. OpenAI spokesperson Taya Christianson stated, "We are aware of this issue and have shipped a new version of the application which encrypts these conversations. We’re committed to providing a helpful user experience while maintaining our high security standards as our technology evolves."

Following the update, Pereira Vieito's demonstration app no longer worked, and the plain-text files were no longer accessible.

Discovery and Context

Pereira Vieito explained his motivation: "I was curious about why [OpenAI] opted out of using the app sandbox protections and ended up checking where they stored the app data." Since OpenAI distributes the ChatGPT macOS app solely through its website, it does not adhere to Apple's sandboxing requirements for Mac App Store apps, which might have otherwise mitigated this issue.
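Whether a Mac app opts into the App Sandbox is recorded in its code-signing entitlements via the `com.apple.security.app-sandbox` key (on a Mac, `codesign -d --entitlements` dumps these). A minimal sketch of that check, run against a sample entitlements plist rather than a real app bundle:

```python
import plistlib

# Sample entitlements plist for an unsandboxed app, such as a build
# distributed outside the Mac App Store might carry. (Illustrative only;
# these are not the ChatGPT app's actual entitlements.)
ENTITLEMENTS = b"""<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
 "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>com.apple.security.app-sandbox</key>
    <false/>
</dict>
</plist>
"""

def is_sandboxed(entitlements_xml: bytes) -> bool:
    """Return True if the entitlements opt the app into the App Sandbox."""
    entitlements = plistlib.loads(entitlements_xml)
    return bool(entitlements.get("com.apple.security.app-sandbox", False))

print(is_sandboxed(ENTITLEMENTS))  # False: the app runs outside the sandbox
```

A sandboxed app would carry the same key set to `<true/>`, confining its file access to its own container; Mac App Store apps are required to enable it, while website-distributed apps are not.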

Privacy and Security Implications

While OpenAI typically reviews ChatGPT conversations for safety and model training unless users opt out, the lack of encryption posed a risk of exposure to unauthorized third parties. This incident underscores the importance of robust security measures in software handling sensitive data.

Learn AI Strategies worth a Million Dollars in this 3-hour AI Workshop. Join now for $0

Everyone tells you to learn AI but no one tells you where.

We have partnered with GrowthSchool to bring this ChatGPT & AI Workshop to our readers. It is usually $199, but free for you because you are our loyal readers 🎁

This workshop has been taken by 1 Million people across the globe, who have been able to:

  • Build businesses that make $10,000 using just AI tools

  • Make quick & smarter decisions using AI-led data insights

  • Write emails, content & more in seconds using AI

  • Solve complex problems, research 10x faster & save 16 hours every week

You’ll wish you knew about this FREE AI Training sooner (Btw, it’s rated at 9.8/10 ⭐)

Brazil Blocks Meta from Using Social Media Posts to Train AI Models

Brazil has blocked Meta from using posts on Instagram and Facebook to train its AI models, following concerns about privacy and data protection. This decision by Brazil’s national data protection agency (ANPD) came shortly after Meta abandoned similar plans in the UK and Europe.

Key Points:

  • Privacy Policy Suspension: The ANPD has immediately suspended Meta’s new privacy policy, which permitted the use of public posts to train generative AI models like chatbots.

  • Meta’s Reaction: Meta expressed disappointment, arguing that its approach complied with local laws and that the decision hinders AI innovation and competition in Brazil.

  • Market Impact: Brazil is a significant market for Meta, with 102 million Facebook users and over 113 million Instagram users.

Regulatory Concerns:

  • Risk of Damage: The ANPD acted due to the "imminent risk of serious and irreparable damage" to the fundamental rights of users.

  • Compliance Deadline: Meta has five working days to amend its privacy policy or face a daily fine of R$50,000 (£6,935).

European Comparison:

  • Policy Scrutiny: Similar policy changes faced scrutiny in the UK and EU, where Meta was asked to delay using public posts for AI training by the Irish Data Protection Commission (DPC).

  • Privacy Measures: In Europe, Meta’s policy change would have applied to posts from users over 18, excluding private messages. However, Meta decided to move forward with the policy in Brazil.

Data Protection and Children:

  • Concerns Over Minors: Pedro Martins from Data Privacy Brasil highlighted a discrepancy in Meta’s data protection measures, noting that posts from Brazilian children and teenagers were to be used for AI training, unlike in Europe where the policy applies only to those over 18.

  • Legal Breach: The ANPD found that this could breach Brazil's data protection laws.

User Opt-Out Procedures:

  • Ease of Opt-Out: The process for Brazilian users to opt out of data usage by Meta is more cumbersome than in Europe, requiring up to eight steps.

Meta’s Response:

  • Pending Comment: The BBC has reached out to Meta for comment on the use of posts from minors and the complexity of the opt-out process in Brazil.

This move by Brazil reflects ongoing global tensions regarding privacy, data protection, and the use of personal information in training AI systems.

That was it for this Week's News. We hope it was as informative and insightful as always!

We Will Start Something Special Within a Few Months.
We will tell you more soon!
But for now, please refer us to other people who would enjoy our content!
This will help us out Big Time!
