I subscribe to a handful of investment-related YouTube channels, and this pattern has been common there for years. A bot will post a comment loosely related to the video about how something worked for them. Another bot will reply asking how they did that. A third bot (not the original commenter) will reply that they worked with so-and-so or invested in such-and-such, and then there will be maybe four or five more comments responding to that. All obvious bot accounts.
It's obvious on these channels because comments there rarely get many replies (when they do, it's almost always from the channel owner), so these long back-and-forth chains stand out. It's so obvious, in fact, that I'm surprised YouTube hasn't done something to address it.
Oh I love these comment threads! I like to add another reply saying something like “oh my goodness, I used Elizabeth Ferguson for my investing too!! She went to my college, so I thought I could trust her. But then I found out she was cheating on me with my wife! We got a divorce and I lost half my assets in the separation. Elizabeth Ferguson is probably enjoying them now :(. Just one experience, but buyer beware!”
I'd be careful with that. Sounds like you could be mistaken for a bot that is part of the scheme and get your Google account banned.
Then again, you should live under the assumption that your Google account could be banned at any time with no recourse. You do have local backups of all your Google account data and don't need your Gmail account to access anything important, right?
That makes me realize that banning is a punishment only usable on people who care about their account. Scammers don’t, a new bot account is a click away. But basilikum would be sad to lose his account.
They're scamming; there's no way they can get the investment results the bots claim. I don't think a real person doing normal stuff would use those bot services.
It's been well known to happen on Reddit too for many years: whole posts and comment threads copied verbatim by new accounts. Nowadays, with AI, you can make it way more dynamic.
I've acquired a sense for at least some of the bots. There's a set of bots that post a high-engagement post about once a day to an implausibly large range of subreddits, with implausible regularity. I can tell, from the fact that I remove them while most other subs don't, that most subs have not figured this out yet.
There is an obvious solution to that problem, one I haven't wanted to put out there, though I've become increasingly suspicious that it's already been figured out anyhow: limit a specific user account to a specific "persona" with plausible interests and posting rates.
And that's where I think the race may well end: victory to the spammers. If there's a winning move against that in general, I haven't figured it out.
I know Reddit is concerned about this at the corporate level, but I'm not sure they realize this is possibly their #1 threat, towering above all others. Not that I have any specific suggestions about what to do about it either. It will be years before the masses realize this and stop visiting, and by the time that happens, all the social media companies are going to be in trouble for the same reason. You can see the leading edge here on HN, but that's still a nearly negligible fraction of the total userbase of something like Reddit today. That will change.
Out of curiosity, has anyone noticed a non-negligible presence of bots in threads on HN? I haven't, but I'm not sure if that's because I'm bad at spotting them or because HN is good at getting rid of them or because HN is a niche platform.
> It's been well known to happen on reddit too for many years
"For many years" being around 20 years at this point. Not sure reddit is a great example, given the founders admitted to using sockpuppets almost since day 1 in order to generate fake activity on the platform.
Have you seen the same chain pattern outside finance yet? Wonder whether investment scams are the most conspicuous because the payout per convert is high or whether it's seeded the widest on YouTube specifically.
I saw something like this for a book. It was under an Instagram reel where the person was describing ways to improve your self-esteem. In the comments section someone mentioned a book that worked for them and it had a few replies saying how it worked for them too. I searched for the book and it was a very new book from an unknown author and zero reviews everywhere.
Yes, and what they do is use actual registered investment advisors' names and set up scam websites for them. That way it looks more legitimate, because if you research the person, you will find they really are registered in official databases.
I’ve been seeing this kind of spam on forums all the way back in 2004. I wonder if it was a feature in Xrumer or whatever they used to post spam back then.
If you have a forum and haven't found a thread that is just one guy arguing with himself on twelve sock accounts, then either you haven't been looking or you only have one user.
Generally when people start having a back and forth about a product I assume it’s astroturfing unless it makes sense in context and/or it’s just one of those brands people genuinely get excited about (they tend to be obvious ones you’ve seen a lot already).
Doesn’t mean I don’t ever get duped, but idk. You learn to spot the signs. I imagine most of us on HN catch most instances. Genuine-seeming referrals aren’t as easy to fake as one would think.
I’m putting together an AI presentation internally for my company, can anyone point to examples of this exact behavior? I’d like to use it as a reference.
About 20 years ago, when I was in university, I was looking for anything to do part time, and posting links in replies on random blogs was one of the jobs that could get you some money. The job came with Excel sheets listing where to post those links and what to post. I was more interested in automating the process. There was a captcha-solving step involved somewhere. We didn't see it as spamming; it was an 'ad posting' job or something.
I don't remember a lot; it just reminded me of my time doing this. I don't remember actually posting many links myself, because it was too boring: lots of manual work for not much benefit.
This is a big problem for WordPress, but custom engines with simple client-side checks (JS-based) get close to zero spam. Those spammers use technology-fingerprinting services to obtain lists of blogs, and they look for popular blog engines only.
"Remember, there are no technological solutions to social problems."
is something I want to counterpoint with "there are no social solutions to technological problems", like how the looming situation pointed out by the Club of Rome in 1973
would be difficult enough to solve in a socially cohesive society run by philosopher kings. Practically you have a choice between democracies which have a 0 probability of being adequate to the task (against the axioms of political science: it's like a perpetual motion machine which violates the first and second laws of thermodynamics and then the old professor chimes in and says it must violate the third too) and autocracies which might get lucky 10⁻¹² of the time; even if the tech fix [1] has a 10⁻³ chance of successfully kicking the can down the road I'd take that chance.
[1] say: a liquid-salt (not metal) fast breeder reactor with a supercritical CO2 power cycle
Yeah, I’ve heard that there is some very axiomatic math to being anti-democratic for some reason. Unlike the mathematically sound benevolent and also omniscient philosopher kings.
Jack Beagle: @blog the ones in your screenshot are pretty good because they are a bit more conversational. Generally these types of spam messages will be trying to promote something specific ("I use <product> myself because..."), but outside of the second message in your example it might have still snuck through. As the LLMs get better, the spam messages will certainly get better.
I’m not a heavy Reddit user but I’ve noticed a sharp increase in comment spam disguised as real discussion.
I think the turning point was when they allowed accounts to hide their comment history. Before, when you could click on an account and read all of their other comments it was easy to tell when an account only existed for fake conversations about a product they were spamming.
Now the spam accounts hide their comment history, so they can do nothing but spam similar comments all over Reddit and walk the line where it's not obvious whether any single comment is spam or a one-off comment from someone trying to be helpful.
Users are using Google and other services to find their other posts and post warnings, but it takes so much more effort now.
Moderators can still see the full comment history.
The advantage of hiding one's comments is precisely that they do not show up as easily as before on Google, discussions don't get derailed by comment muckrakers going through their opponent's posts and accusing them of being anti-Zionist or pro-Zionist, and actual stalkers have a far harder time tracking down victims.
I have noticed the same uptick in bot-like behaviour there. The part I struggle to square is why so much of it is so useless.
Maybe it's account laundering, but on any popular post you'll see at least half of the comments are tangential at best. They're not an expression of anything a person would express: replying with just skull emojis to a random news post, or saying "he really said" with a word-for-word recreation of a throwaway quote from a video. No one ever replies to these posts, they get maybe two upvotes (if that), and the platform doesn't reward them at all, yet they constantly appear in a very artificial-looking way.
Just a thought, but I wonder if Reddit is hiding this information deliberately to prevent anyone from publishing a study estimating what percentage of their traffic is driven by bots (anecdotally, it's a lot, and it used to be mostly organic even half a decade ago).
This is a direct result of pretty much all of the LLMs using Reddit as a training source. People are selling GEO (generative engine optimization) services, with Reddit spam being a big part of that.
This has been a thing since blogs became widespread 25+ years ago, especially with the advent of WordPress. It was even a “commonly accepted” SEO tactic for a while.
Nice. I run a site that depends on user submitted content, and it's really interesting to observe how some people try to get around the guardrails. Not sure if your tool does this, but I would perform some additional checks for comments that have links in them.
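For the link case specifically, a first-pass filter might just extract the domains and compare them against a vetted list. Everything here (the regex, the threshold, the return values, the idea of an allowlist) is only an illustrative sketch, not any real site's policy:

```python
import re

# Captures the host portion of http(s) URLs.
URL_RE = re.compile(r"https?://([^/\s]+)", re.IGNORECASE)

def link_check(comment: str, allowed_domains: set[str], max_links: int = 2):
    """Return a rejection reason for link-bearing comments, or None
    if the comment passes this (very rough) first pass."""
    domains = [d.lower() for d in URL_RE.findall(comment)]
    if len(domains) > max_links:
        return "too many links"
    unknown = [d for d in domains if d not in allowed_domains]
    if unknown:
        return f"unvetted domains: {', '.join(unknown)}"
    return None
```

A flagged comment would then go to a moderation queue rather than being rejected outright, since legitimate users post unfamiliar links too.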
The post timing is the main giveaway; surely it wouldn't be that hard to space out these spam posts. The amount of automated comments being spammed on all social platforms is not quite at a tipping point, but it has significantly increased.
Bots would win over all anti-spam, anti-slop measures. All blog posts and comments everywhere would be filled with spam and slop. That's when humanity turns its head away from screens, back towards other humans nearby, and starts talking to each other, while the ocean of slop and spam keeps bubbling, infested with bots.
All replies that just pad out the comment with a summary of TFA look like spam. There is no inkling of any reason to make the comment, so it just has to regurgitate what TFA is about.
The scariest part is that humans are starting to use AI to generate spam comments, which in turn get used to train the models. Will the language capabilities of these models just keep getting worse?
Text generation is now cheap, so I expect this problem to worsen. I hate to write it, but I don't see any other solution, on platforms that aspire to be a modern agora, than identity verification...
Why would identity verification solve this? The spammer can just verify himself. And if he doesn't want to, or it's at a bigger scale than an individual, there will be services where you can get identity verifications on the cheap. They'll work either by paying people in a poor country to verify themselves all day or, even more cheaply, by sketchy age-verification services on sketchy porn sites proxying or replaying people's verifications to another service of your choice.
We're probably close to the point for social networks where authenticity/transparency is more valuable than network effect.
You could probably have a workable social network just with the following properties:
1. Use a combination of digital friend-of-a-friend invite chains and sign-ups at physical 3rd spaces to build out the social network
2. If a user's account is found to be abusing the network, kick the user that invited that account plus everyone in that branch of the invite tree
3. To re-enable an account from a kicked branch, each user has to visit one of your 3rd spaces (and temporarily lose their invite privileges after re-enabling).
4. Security engineers do what they normally do at social media companies, except you now incentivize them to publicize efforts to reveal attackers so that you generate foot traffic at the 3rd spaces.
Now instead of hiding a report that grandma is friends with a Russian bot, your giddy security team does a publicity stunt to kick 100,000 users on Thursday.
And that will generate record drink sales at your 3rd spaces on Friday. (Senior citizen's discount applies.)
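The invite-tree mechanics in steps 1-3 are simple to model. A toy sketch, with illustrative names only (`signup`, `report_abuse`, `reinstate` are not any real product's API, and the suspended-invite-privileges detail is left out):

```python
from collections import defaultdict

class InviteNetwork:
    """Toy model of the invite-tree moderation scheme sketched above."""

    def __init__(self):
        self.children = defaultdict(set)  # inviter -> set of invitees
        self.parent = {}                  # invitee -> inviter
        self.active = set()

    def signup(self, user, inviter=None):
        # inviter=None models a sign-up at a physical 3rd space (step 1).
        if inviter is not None:
            if inviter not in self.active:
                raise ValueError("inviter must be an active member")
            self.children[inviter].add(user)
            self.parent[user] = inviter
        self.active.add(user)

    def report_abuse(self, abuser):
        # Step 2: kick the user who invited the abuser, plus everyone
        # in that branch of the invite tree (the inviter's subtree).
        root = self.parent.get(abuser, abuser)
        stack, kicked = [root], []
        while stack:
            u = stack.pop()
            if u in self.active:
                self.active.discard(u)
                kicked.append(u)
            stack.extend(self.children[u])
        return kicked

    def reinstate(self, user):
        # Step 3: re-enabled only after an in-person visit to a 3rd
        # space (the visit itself is outside this sketch).
        self.active.add(user)
```

Note that under step 2 a single abusive account takes out its inviter's whole subtree, including innocent siblings; that collateral damage is exactly what's meant to make members careful about who they invite.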
It did solve the spam/Russian-bot problem on https://www.lide.cz/ . You have to verify yourself using a national ID, and you post under your real name. The conversation has since been noticeably more thoughtful and civil than, say, on FB.
Not that I am happy with it; it would be ideal to have my old internet back.
I also see a ton of this here on HN as the political topics have ramped up.
Not enough people flag those comments when the content aligns with their bias. It's even less likely to get flagged when it's a double whammy of politics and AI. Loosely being about AI should not give it a free pass.
But definitely, bots on reddit seem significantly more common in the past year or two.
"For many years" being around 20 years at this point. Not sure reddit is a great example, given the founders admitted to using sockpuppets almost since day 1 in order to generate fake activity on the platform.
https://claimyr.com/government-services/irs/I-filed-my-2021-...
“Wow! Seems like it’s so easy to change over with savings like that!”
https://en.wikipedia.org/wiki/XRumer
which was a lot better than other products on the market and solved difficult problems like CAPTCHAs and email verification links and was famous for a "conversational" advertising campaign which generated results like
https://www.garagejournal.com/forum/threads/give-me-link-to-...
https://en.wikipedia.org/wiki/The_Limits_to_Growth
If we don’t police our side nobody will.