A controversial app that paid users for phone call recordings to train AI systems has been disabled following a major security breach that exposed user conversations and metadata. Neon founder Alex Kiam confirmed the shutdown in emails to users this week, promising the service will return with bonus payments for affected customers after addressing security vulnerabilities.
Security Breach Forces Temporary Shutdown
Neon’s rapid ascent to the top five free iOS app downloads ended abruptly on September 25 when TechCrunch exposed a critical security flaw that allowed unauthorized access to user call recordings, transcripts, and metadata. The app, which had reached the number two spot in social-networking apps on iOS, immediately disappeared from download charts following the revelation.
Founder Alex Kiam confirmed the data exposure in an email to CNET, stating “We took down the servers as soon as TechCrunch informed us.” The company’s terms of service grant Neon broad rights to “sell, use, host, store, transfer” and distribute user recordings through any media channels. Users reported the app stopped functioning completely after the security issue became public, with many complaining of network errors when attempting to cash out earnings.
The Android version maintains a dismal 1.8-star rating in the Google Play Store, while iOS reviews have plummeted, with users labeling the service a scam. In his email, Kiam assured users that "your earnings have not disappeared" and promised bonus payments when the service resumes, though he provided no specific timeline for the relaunch.
Legal and Privacy Concerns Mount
Legal experts warn that Neon’s business model creates significant liability risks for users, particularly in states requiring all-party consent for call recording. David Hoppe, founder of Gamma Law, told CNET that users could face criminal charges and civil lawsuits for recording conversations without proper consent. “Imagine a user in California records a call with a friend, also in California, without telling them. That user has just violated California’s penal code,” Hoppe explained.
The app attempts to circumvent consent laws by recording only the caller’s side of conversations, but legal experts question whether this provides adequate protection. According to the Digital Media Law Project, twelve states including California, Florida, and Maryland require all parties to consent to recording. Violations can result in penalties reaching thousands of dollars per incident, and Neon’s terms of service offer no protection against such liability.
Data governance expert Valence Howden of Info-Tech Research Group noted that even anonymized data presents risks. “AI can infer a lot, correct or otherwise, to fill in gaps in what it receives, and may be able to provide direct links if names or personal information are part of the exchange,” Howden said.
AI Training Data Demand Drives Controversial Model
Neon’s business model capitalizes on the AI industry’s insatiable appetite for real conversation data. The company’s FAQ states that collected call data is “anonymized and used to train AI voice assistants,” helping systems “understand diverse, real-world speech.” Users could earn up to $30 daily for regular calls or 30 cents per minute for Neon-to-Neon calls, with the company processing payouts within three business days.
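The figures above imply a ceiling on per-minute earnings: at 30 cents per minute for Neon-to-Neon calls, the $30 daily cap is reached after 100 minutes of calling. A minimal sketch of that arithmetic, using the rates stated in the article (the helper function and its name are illustrative, not part of Neon's actual code; amounts are kept in integer cents to avoid floating-point rounding):

```python
# Payout rates as stated in the article, expressed in integer cents.
RATE_CENTS_PER_MINUTE = 30   # Neon-to-Neon rate: 30 cents per minute
DAILY_CAP_CENTS = 3000       # stated daily earnings cap: $30

def daily_payout_cents(minutes_called: int) -> int:
    """Illustrative daily earnings for Neon-to-Neon calls, capped at $30."""
    return min(minutes_called * RATE_CENTS_PER_MINUTE, DAILY_CAP_CENTS)

# The cap kicks in at 100 minutes (100 * 30 cents = $30).
print(daily_payout_cents(45))   # 1350 cents = $13.50
print(daily_payout_cents(100))  # 3000 cents = $30.00, the cap
print(daily_payout_cents(200))  # still 3000 cents: extra minutes earn nothing
```

Working in cents rather than floating-point dollars is the standard way to keep monetary arithmetic exact; the cap structure also makes clear why heavy callers would hit diminishing returns quickly.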
Zahra Timsah, CEO of AI compliance firm i-Gentic AI, explained the industry demand: “The industry is hungry for real conversations because they capture timing, filler words, interruptions and emotions that synthetic data misses, which improves the quality of AI models.” However, she emphasized that “that doesn’t give apps a pass on privacy or consent.”
The Federal Trade Commission has increasingly scrutinized AI data collection practices, particularly regarding voice biometrics and consent requirements. Meanwhile, the NIST AI Risk Management Framework emphasizes the importance of valid data sourcing and privacy protection in AI development.
Uncertain Future Amid Regulatory Scrutiny
As Neon works to address security concerns, the app faces mounting regulatory challenges and user skepticism. The company’s promise to return with enhanced security layers and bonus payments comes amid growing scrutiny of AI data practices worldwide. The European Data Strategy and emerging AI regulations could further complicate Neon’s cross-border operations.
TechCrunch’s initial reporting highlighted that sharing voice data presents inherent security risks, even when companies promise to remove identifying information. The publication’s investigation revealed that the security flaw allowed access to sensitive call data, raising questions about the app’s fundamental architecture and data protection measures.
Legal experts continue to warn users about the risks. “Unless you are absolutely certain of the consent laws in your state and the state of the person you’re calling, and you have explicitly informed and received consent from every other person on the call, do not use this app,” Hoppe advised CNET readers. With both security and legal concerns unresolved, Neon’s promised return faces significant hurdles despite the lucrative AI data market driving its business model.
