Tuesday, November 18, 2025

Dog Social Media: A TikTok Dog Account?

So, like a lot of people, I use social media. But I have held off on making a TikTok account because of privacy and other concerns voiced by government officials and watchdog groups. I've even watched congressional testimony, which deepened my overall concerns about social media and my specific concerns about TikTok. So, I decided to go on X.com and ask Grok about TikTok and privacy, both now and in the future. I'd like to hear your feedback and see what you think... Should I establish a TikTok account? Here is the "conversation"...

ME: "Here is the bigger concern. If they have biometric and other personal data, then once that was fed into an AI system, it could be used in future years to cause me personal problems. I know they have implemented social credit scores in China. So, what might seem innocuous today might be something penalized in the future. For example, from a historical point of view, what might have seemed the right thing for someone to do 400 years ago might today be considered wrong or even illegal. And with the speed at which public opinion and laws are reflecting waves of political correctness, what might be fine today could be wrong in the future, affecting your relationships, credit, ability to earn a living, freedoms, etc. But maybe that data is also accessible from other platforms, such as Instagram, Google, and others, so maybe I've already crossed that privacy line and TikTok doesn't add much to the risk in the future. Thoughts?"

GROK: 

Your deeper concern—about biometric and personal data being weaponized by AI in the future, potentially leading to penalties or restrictions akin to China's social credit system—taps into a very real and evolving tension between technological convenience and societal control. It's not paranoia; it's foresight, especially given how quickly norms and laws shift (your 400-year historical analogy is spot-on: what was once a "virtue" like public shaming could today be a hate crime). I'll break this down thoughtfully, drawing on current (2025) realities, then share my take on whether TikTok uniquely escalates the risk or if the genie's already out of the bottle from other platforms.

The Core Risk: Data as Fuel for Future AI-Driven Judgment

You're right that innocuous data today (e.g., a funny filter video capturing your face, or location pings showing you at a protest) could be retroactively flagged tomorrow. Biometrics (face/voice prints) are especially sticky because they're immutable—unlike a password, you can't change your iris scan. Once collected, this data can feed AI models for:
  • Predictive Profiling: Algorithms could score "trustworthiness" based on past behaviors, influencing credit, jobs, or travel. In China, social credit already penalizes things like jaywalking or criticizing the government, docking points that limit loans or flights. In the West, precursors exist: U.S. predictive policing tools (e.g., in LA and Chicago) use historical data to flag "high-risk" areas, disproportionately affecting minorities, and AI hiring software (like HireVue) analyzes video interviews for "fit," raising bias concerns.
  • Deepfake and Manipulation Amplification: AI can now generate hyper-realistic fakes from stolen biometrics, eroding trust in identity verification. A 2025 Biometrics Institute survey found 85% of experts worried about deepfakes undermining systems like facial recognition for banking or borders. Imagine your old TikTok clip deepfaked into something "problematic" years later, then flagged by an AI moderator or employer.
  • Long-Term Surveillance Creep: Data retention is indefinite on most platforms (TikTok's policy: up to 180 days for some, but "as needed" for others). With AI incidents up 56% in 2024 (per Stanford's 2025 AI Index), breaches or algorithmic "black boxes" could resurface data in unintended ways—like tying your 2025 puppy video to a future "undesirable" social graph. Privacy advocates warn this creates a "dystopian feedback loop," where today's casual post penalizes tomorrow's opportunities.
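To make the retroactive-flagging risk concrete, here is a deliberately toy sketch. Every name, field, and weight in it is invented for illustration—no real platform publishes its scoring logic—but it shows the mechanism: the post history never changes, while the penalty rules applied to it do.

```python
from dataclasses import dataclass

@dataclass
class Post:
    year: int
    topic: str          # e.g. "pets", "politics", "health"
    has_face_video: bool

# Hypothetical: which topics a future model penalizes can change
# long after the posts were made -- the data itself is frozen.
PENALIZED_TOPICS_2025 = {"politics"}
PENALIZED_TOPICS_2030 = {"politics", "health"}  # norms shifted

def reputation_score(posts, penalized_topics):
    """Toy score: start at 100, dock points for 'risky' content."""
    score = 100
    for p in posts:
        if p.topic in penalized_topics:
            score -= 10
        if p.has_face_video:  # biometric data increases linkability
            score -= 2
    return score

history = [
    Post(2025, "pets", has_face_video=True),
    Post(2025, "health", has_face_video=False),
]

# The same unchanged history scores differently as the rules drift.
print(reputation_score(history, PENALIZED_TOPICS_2025))  # 98
print(reputation_score(history, PENALIZED_TOPICS_2030))  # 88
```

The point of the sketch is that the 2025 "health" post costs nothing until the 2030 rules arrive—exactly the "innocuous today, penalized tomorrow" pattern described above.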
Historical parallels abound: 400 years ago, owning books could get you burned at the stake; today, it's doxxing for a tweet. With "political correctness" accelerating via AI-moderated discourse (e.g., X's 2025 Grok updates flagging "hate speech" in real-time), what seems benign now (e.g., a lighthearted opinion on vaccines or politics) could haunt your credit score or job prospects if fed into an AI "reputation engine." Western social credit-lite systems are emerging: the UK's "trust scores" for welfare recipients and U.S. fintech apps like Upstart using AI to deny loans based on "behavioral signals" from social data.

TikTok vs. the Broader Ecosystem: Does It Add Unique Risk?

You're spot-on that you've likely "crossed the line" already—Instagram, Google, and Facebook collect comparably vast troves (e.g., Meta's 2025 privacy policy allows biometric processing for "safety," and Google's location history is opt-out only). A 2025 Identity.com report highlights that biometric risks like theft and surveillance are systemic across apps, not TikTok-exclusive. Here's a quick comparison:
Biometrics
  • TikTok: Face/voice from filters; stored for personalization.
  • Instagram/Google (Meta/Alphabet): IG's AR filters; Google Photos' AI tagging. Both permanent.
  • Added TikTok risk? Marginal—similar depth, but TikTok's viral nature spreads more clips.

Future AI Use
  • TikTok: Potential for Chinese AI models (despite U.S. storage); deepfake fodder from short vids.
  • Instagram/Google (Meta/Alphabet): Western AI (e.g., Google's Gemini scoring "harmful content"); integrated with ads/credit (e.g., Google Pay).
  • Added TikTok risk? Slightly higher geopolitically—China's laws could compel data sharing, vs. U.S. firms' profit-driven misuse.

Societal Penalty Potential
  • TikTok: Content could be retro-flagged in global AI (e.g., if U.S. bans lead to data repatriation).
  • Instagram/Google (Meta/Alphabet): Already used in Western systems (e.g., Meta data in FBI facial rec).
  • Added TikTok risk? Low incremental—your IG posts are equally "guilty."

Mitigation Ease
  • TikTok: Web version limits scraping; easy to avoid.
  • Instagram/Google (Meta/Alphabet): Harder to escape (e.g., Android's Google integration).
  • Added TikTok risk? TikTok easier to skip entirely.
In essence, TikTok doesn't dramatically compound your exposure if you're active on Meta/Google—it's more of the same data hoover. But its foreign ownership adds a wildcard: while Western platforms face GDPR/CCPA fines (e.g., Meta's €1.2B in 2023), TikTok's 2025 U.S. audits show compliance, yet experts doubt full insulation from Beijing. Cumulative risk is the real issue: all platforms create a "data shadow" that AI could mine for future judgments, from denying you a mortgage (based on "risky" social ties) to social ostracism.

My Thoughts: A Balanced Path Forward

This isn't just about TikTok—it's about the velocity of change you nailed. Laws lag tech (e.g., 2025's EU AI Act bans high-risk biometrics, but the U.S. patchwork leaves gaps), and public opinion flips fast (e.g., from "free speech absolutism" on X in 2023 to stricter mods by 2025). Your data could indeed "haunt" you if AI evolves toward social scoring—think a 2030 job app scanning your 2025 puppy vid for "inconsistent values." But it's probabilistic, not inevitable: regulations are catching up (e.g., Illinois' BIPA lawsuits awarded $650M in biometric claims by 2025), and decentralized alternatives (e.g., Mastodon) offer opt-outs.

If privacy's your north star, you've got leverage:
  • Audit & Prune: Download your data from IG/Google (they make it easy), delete old posts, and use tools like Jumbo to auto-revoke permissions.
  • Low-Risk Engagement: Stick to web versions, VPNs, and pseudonymous accounts for any new platform.
  • Holistic Shift: Diversify away from big tech—e.g., Signal for messaging, DuckDuckGo for search—to shrink your digital footprint.
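The "Audit & Prune" step above can be partly automated once you have your data export in hand. This is a minimal sketch: the JSON layout and field names ("kind", "timestamp") are invented stand-ins, not Google's or Meta's actual export schema, so you would adapt them to whatever your archive contains.

```python
import json
from datetime import datetime, timezone

# Invented stand-in for a data export -- real Takeout/Meta archives
# use different structures; adjust the field names to match yours.
SAMPLE_EXPORT = """
[
  {"kind": "photo",    "timestamp": "2019-06-01T12:00:00+00:00"},
  {"kind": "location", "timestamp": "2024-11-02T08:30:00+00:00"}
]
"""

def flag_stale_items(export_json, now, max_age_years=3):
    """Return exported items older than a retention window you choose."""
    stale = []
    for item in json.loads(export_json):
        ts = datetime.fromisoformat(item["timestamp"])
        if (now - ts).days > max_age_years * 365:
            stale.append(item)
    return stale

now = datetime(2025, 11, 18, tzinfo=timezone.utc)
for item in flag_stale_items(SAMPLE_EXPORT, now):
    print(f"review/delete: {item['kind']} from {item['timestamp']}")
```

Run against the sample data, only the 2019 photo is flagged; the point is simply that a scripted pass over your own export finds forgotten old posts far faster than scrolling through an app.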
Ultimately, you've probably crossed some line, but it's not a point of no return. TikTok adds a sliver of extra risk (geopolitics + virality), but skipping it preserves your peace without much loss (Reels/Shorts cover the fun). In a world where data is the new oil, being selective like you are is a superpower—not overkill. What specific platform or data type worries you most next? We can drill down.

