
Assassination Attempt On Donald Trump

One of the replies in that Facebook chain compares the Trump photo to what is, if I recall correctly, an actual picture of Hitler after an assassination attempt.

I haven't posted it because I wonder whether img2img faking forced the similarities, but if we could see the originals...

Well, if it is entirely as it seems, it is another argument for "staged".

This is a guy whose regular claim to fame was staged over-the-top violence sometimes involving real, self-inflicted wounds.

I am not going to say that I think this is a photo of that Habsburg jaw having mother fucker next to Donald Trump. The hair looks wrong and that chin simply doesn't chin.
 
I am not going to say that I think this is a photo of that Habsburg jaw having mother fucker next to Donald Trump. The hair looks wrong and that chin simply doesn't chin.
Agreed.
 
Let's say he did stage it. So what? Doesn't change shit. Oh, and Thomas Matthew Crooks isn't the only white dude in the world who looks like that.
 
Let's say he did stage it. So what? Doesn't change shit. Oh, and Thomas Matthew Crooks isn't the only white dude in the world who looks like that.
Well, the one thing it would indicate is that people are even bigger fools for following him, and that he is even more evil than many people realize...

But it's all about scale... Compared to the rest of it, this, which should be damning for anyone all on its own, is somehow dwarfed by all the other evil he is committing.

Any one of a massive pile of despicable acts should be enough, and we have a whole pile of them, and somehow no consequences are coming to him.
 
Folks, it's not that hard to get to the bottom of this:

Fake Photo of Trump, Gunman Thomas Crooks Planning Assassination Attempt Generated by AI

The guy has six fingers on his right hand, which is a pretty good indicator of A.I. Not long ago, I saw the image of an attractive woman in an ad that, TBH, got my blood pressure spiking. I was about 99% certain it was AI, and when I saw her six fingers on one hand, that made it 100%. Still hot though. Now that I think about it, that extra finger could come in handy. :dancy:
 
In years to come, sorting fact from fiction may become a full time endeavor. Or we just ignore news feeds.
No, this won't be tolerated for very long.

There are technologies, involving public key cryptography, which solve parts of this problem; I've discussed this before.

Using math, the claimant would sign an image as "real" with their private key, and anyone could validate that signature against the claimant's public certificate.

This could be done at various times, by various more or less trustworthy processes, individuals, and/or organizations.

This could cover anything from a story with witness statements and so on, up to official sources holding signed certificates from official issuing authorities.
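
Just to make the signing idea concrete, here's a rough sketch in Python. It leans on the third-party "cryptography" package, uses Ed25519 keys, and the file name "photo.jpg" is made up; it's only an illustration of sign-then-verify, not any particular vendor's scheme.

```python
# Minimal sketch: sign an image file with a private key and verify it
# against the matching public key. Requires the 'cryptography' package;
# "photo.jpg" is a hypothetical file name.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The claimant generates a key pair; only the public key gets published.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

with open("photo.jpg", "rb") as f:
    image_bytes = f.read()

# The claimant signs the image bytes, asserting "this is the real photo".
signature = private_key.sign(image_bytes)

# Anyone holding the public key (e.g. from a certificate) can check it.
try:
    public_key.verify(signature, image_bytes)
    print("Signature valid: image matches what the claimant signed.")
except InvalidSignature:
    print("Signature invalid: image was altered or signed by someone else.")
```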

The problem is that any system proposed by a corporate or government interest cannot actually be trusted: while it can and should be entirely possible to run such a scheme mostly anonymously, anonymity wouldn't be a design feature of their version.

Even phones today don't give the user the ability to replace the non-anonymous, device-specific signing certificates with a self-signed certificate and register it to an anonymous user account; the hurdle here is the conflict of interest between user safety and the "big brother" power that can be abused if the system is left strategically broken.

If tech were designed properly, for instance, the government wouldn't be able to ask Apple to unlock a phone, because Apple would never have been the source of the private unlocking key in the first place; nor could anyone fake a signature, because only the user would ever hold the private key.
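
A sketch of what "only the user holds the private key" would look like in practice (again purely illustrative, using the same "cryptography" package; the anonymous registration step at the end is entirely hypothetical, and real device key stores like secure enclaves work differently under the hood):

```python
# Minimal sketch: the key pair is generated on the user's own device.
# Only the serialized public key ever leaves the device, so neither the
# manufacturer nor the government can sign or unlock anything on the
# user's behalf.
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

private_key = Ed25519PrivateKey.generate()   # never exported from the device

public_pem = private_key.public_key().public_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PublicFormat.SubjectPublicKeyInfo,
)

# Hypothetical step: publish the public key under an anonymous account ID.
# register_anonymous_key(account_id="anon-12345", public_key_pem=public_pem)
print(public_pem.decode())
```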
 
Some people defend AI despite the obvious flaws and dangers, much as Christians defend the bible despite the obvious flaws.

The ability to use AI to create fake video and images in only a few minutes has been widely demonstrated in the news.

Digital currency was supposed to prevent criminals from hiding financial transactions; it did not work.

Despite decades of hacking and improvements in computer security, hacking still occurs.

The problem is trying to bound complexity as complexity grows. Beyond a point it is too big for us humans to manage.

Look at aviation and the continual problems that occur.
 
The ability to use AI to create fake video and images in only a few minutes has been widely demonstrated in the news.
And yet AI CANNOT create signatures from trustworthy entities who stake their own reputations on them. That says this isn't an AI problem; it's a problem of utterly disregarding all the warnings from people who said we needed a mechanism of assurance and validation back when it was just "Photoshop" and not the dreaded "ArTiFiCiAL iNtElLiGeNcE".
 
So the guy has six fingers on his right hand. It happens!
But seriously - this put-up is obviously not evidence of anything.
But I still estimate the odds that there was NOT any funny business with that kid "missing" from that range at less than 50-50.
The miracle ear is its own evidence.
 
Some people defend AI despite the obvious flaws and dangers, much as Christians defend the bible despite the obvious flaws.
I am not seeing an analogy here. AI is not a holy text, and doesn't tell us how to behave.
The ability to use AI to create fake video and images in only a few minutes has been widely demonstrated in the news.
You should pay a lot less attention to what is these days called "news"; It is now almost entirely designed to cause fear and/or outrage, and it's very effective at that.
Digital currency was supposed to prevent criminals from hiding financial transactions; it did not work.
No, it wasn't. Quite the opposite. And it is working worryingly well.
Despite decades of hacking and improvements in computer security, hacking still occurs.
And always will, as anyone who understood the first thing about it would be aware.
The problem is trying to bound complexity as complexity grows. Beyond a point it is too big for us humans to manage.
We reached that point about 8,000 years ago. It's not a problem; We don't need to manage it all - just our little bit.

Failure to grasp this is a major cause of unhappiness (and another very good reason why you should pay a lot less attention to what is these days called "news").
Look at aviation and the continual problems that occur.
Commercial aviation is one of the safest and most reliable endeavours in human history.

You should definitely pay a lot less attention to what is these days called "news".
 
In years to come, sorting fact from fiction may become a full time endeavor. Or we just ignore news feeds.
No, this won't be tolerated for very long.

There are technologies, involving public key cryptography, which solve parts of this problem; I've discussed this before. Using math, the claimant would sign an image as "real" with their private key, and anyone could validate that signature against the claimant's public certificate. ...

How do you stop people who have an agenda from using AI to promote their products, ideologies or beliefs?
 

How do you stop people who have an agenda from using AI to promote their products, ideologies or beliefs?

How do you stop people from using Photoshop?

How do you stop people from using the actual footage of celebrities with questionable morals?

How do you stop people with government backing and a number of fake research mills spinning up plausible looking bullshit?

The first way I know how is to do exactly what I said: force those who wish to participate to have good standing among their peers.

If someone is not willing to sign and back a message with the weight of trust in their organization, then that's an indicator that they aren't trustworthy in the first place.
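
A toy sketch of that "good standing" check, in the same vein as the earlier signing example: accept a message only if it carries a valid signature from an organization already on a trusted roster. The roster contents and the organization name are made up, and it reuses the "cryptography" package; it's an illustration of the idea, not a real trust system.

```python
# Minimal sketch: a message counts as "backed" only if some organization
# in good standing (i.e. in our trusted roster) signed it.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

# Pretend this key was published earlier by an organization in good standing.
org_key = Ed25519PrivateKey.generate()
trusted_roster: dict[str, Ed25519PublicKey] = {
    "Example Wire Service": org_key.public_key(),  # hypothetical org
}

def is_backed_by_trusted_org(message: bytes, signature: bytes) -> bool:
    """Return True if any organization in the roster signed this message."""
    for org_name, public_key in trusted_roster.items():
        try:
            public_key.verify(signature, message)
            print(f"Message vouched for by: {org_name}")
            return True
        except InvalidSignature:
            continue
    return False

claim = b"This photo is an unaltered original."
print(is_backed_by_trusted_org(claim, org_key.sign(claim)))             # True
print(is_backed_by_trusted_org(b"tampered claim", org_key.sign(claim))) # False
```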
 
Saw this post on FB and thought it was funny.

ChatGPT.jpg

I'm pretty sure we don't have any data on how effective ChatGPT is at analyzing assassination attempts, and I'm also pretty sure ChatGPT has never been trained to analyze them.
 