When does an editorial stop being an editorial and become a full-blown ad that intends to deceive you into thinking it has anything to do with valid news? Pretty much as soon as you read the word “Advertorial”. This is a relatively new phenomenon. Advertising agencies are starting to use this technique to get their intended message, “buy my crap”, across to those who just don’t look at pop-up ads anymore. They’re shifting to this underhanded method of polluting our minds because the old-school ways are failing. We just aren’t looking at the banner ads, pop-ups, and ad bars across the top and bottom of our screens anymore.
These old-school techniques aren’t working as well to drive revenue and sales. So now they have to be extra sneaky by making their ads appear to be news articles. Why? Because people will read a news article. If you are good enough at it, you can pull the wool over the eyes of a lot of people who would normally see a pop-up ad and close it right away out of annoyance. And some of these agencies are so good at advertorials that if I hadn’t heard the piece CBC Radio One ran on them months back, I might not have recognized them as the advertisements they really are.
I understand that a company needs to sell its crap. I do. In a way, I sell my opinions for attention. I don’t make money off of it and don’t intend to, but you could say that I am looking for a bit of self-gain just like they do. The difference is that this article you’re reading right now is an honest attempt to pass on important information I sincerely feel you need to know. The advertorial tries to convince you that your privacy and security as an internet user are at a higher risk than they really are… just to get into your wallet. I’m up front when I say that I prefer that my articles, stories, comics and artwork are shared, and only because my intent is to entertain or inform you. I require no money and I certainly won’t shove information down your throat with a spoonful of sugar. You may leave knowing just a little more than you did before you got here. You may leave with a laugh or even upset at my opinion, but the emotions I provoke are based on real ideas.
Wait a second, aren’t your fears of losing privacy also a very real thing? Yes, but only if they are based on real reasons, presented in a way that is not intended to fool you right out of the gate in order to make a sale. Otherwise, the emotion the above advert intends to provoke is not based on anything real.
So, as I always caution, just be careful of what source you are reading. Research before trusting what you read and you won’t feel the need to shell out money for a strawman.
The following is a rewrite of an article that I posted last year. Recent news has led me to edit the original and repost it, because my stance on the issue has not changed. So, I feel it bears repeating:
Apocalypse: a nuclear wasteland. The brittle bones of the masses of humanity litter a dark, future battlefield. A terminator’s foot slams down and crushes a skull. This is the opening scene from Reese’s dream in The Terminator. But is this even a remote possibility? Are we doomed as a species to create the ultimate war machine that would judge us as inferior, master our world and push us aside? Although the scenario has a non-zero probability, I doubt it is at all likely.
Humans don’t create technology just for the sake of creating it. If that were so, we would have created hybrid and electric cars for mainstream use a long time ago. Such technology just wasn’t as profitable as oil until recently. Self-determining machines, at least on the level of full artificial intelligence, aren’t profitable. It’s a nice area of research, and some amazing results have been achieved. But there is a reason we don’t have R. Daneel Olivaw (The Robots of Dawn, Isaac Asimov) serving us, holding deep conversations with us, and able to reason about, or even break, the Three Laws of Robotics in a way no machine industry uses today could.
The fact is machines have always taken a support role, like collision avoidance in automobiles and sensors that open doors for the disabled. We use technology to correct damage to our bodies, like prosthetic limbs and electronic eyes that see. We develop technology to enhance our abilities, like smart drugs that target memory storage and recall in the human brain. The point is that technology development has always been for the purpose of improving the human condition.
That said, there certainly have been developments that did and do the exact opposite, like firearms and the nuclear bomb. But these technologies are in the hands of humans, not self-directed machines intent on wiping us out. Don’t get me wrong, I don’t rule out the possibility of a small group of psychotic humans working on developing such automation. Again, I just find it unlikely. Wherever you get someone with the ability to do this, you have many more someones who can find those people and stop them.
The human predator instinct that drives our desire to take and destroy is balanced by the instinct for survival and continuance. The psychotic among us have always been few by comparison to the whole. We have the ability to make the perfect war machine, but we, as yet, don’t trust anything but ourselves to oversee it. This is why Skynet can never exist in reality. The risk would be far too great, and the military leaders of our world would never trust it.
For this reason, machines will never be allowed to self-direct or make decisions without a human behind them. In a private interview, a Captain in the Canadian Forces Reserves told me that machines with guns would never happen. This is why we use human-directed drones to deliver the killing blow to the enemy, and not AI software on the same drones. Such automation removes the human element. Military officials are concerned with keeping down collateral damage and the deaths of innocents and civilians. To them, a machine just cannot detect the nuances of behavior as quickly or as effectively as a trained human. They simply won’t ever trust one not to kill a child. Although these types of mistakes happen, they don’t happen nearly as often as they could.
Technologist Michio Kaku, and many other highly educated individuals, foresee technology enhancing our abilities. If a technological apocalypse is going to happen, it will be with mechanically enhanced humans and drones on the battlefield, not robots and Hunter-Killers blasting anything with body heat. If anything of that sort does get developed, pushed through the testing phase and placed on the battlefield, its active duty would be short-lived. Sooner rather than later these things would rampage through friendly forces, and the kill switch on the whole idea would have to be thrown. Artificial intelligence is still far from a reality, still far from full sentience. The mimicry achieved so far is amazingly convincing, I’ll give you that. But we are still seeing these demonstrations on the showroom floor, where the stage is set in such a way as to make mimicry seem like sentience. We are convinced that AI technology is just around the corner. Yet, as Tay the artificial racist demonstrated, the AI we think we see is not true AI. It’s certainly artificial, but the “I” half is just not there.
It is one thing to program your neural network to learn; it is quite another to have that same network differentiate between right and wrong, reason and unreason, or logic and stupidity. It’s one thing to create a mechanical pack-dog that follows your unit. It is another to put a weapon in its hands and trust it not to kill friendlies, or children playing with sticks as imaginary guns, or even to defend itself from being hacked by the enemy and used against you. If the Pentagon cannot keep secrets from Anonymous, and cannot even prevent its drones from being hijacked electronically, then robots cannot be properly defended either. This is the very risk that can turn an advanced weapon into an advanced liability, and that is why it will never be trusted by the warmongers among us. Skynet is an idea of fiction, and will be for as far into the future as we can see.
What is going on right now? Well, remember a couple/few posts ago where I mentioned the manual hacking attempt on my not-at-all-important blog? That is what’s going on right now. It started at 19:04 local time. After two more – one at 19:38 and another roughly fifteen minutes ago at 20:25 – the timing is random again, which tells me this hacker is once more working manually. I wonder if he knows that you can actually get a program from the Dark Web that does this automatically?
Should the right to be forgotten extend to organizations and companies? For example, the issue from 2011, when University of California police pepper-sprayed peaceful protesters, has just popped up again. It appears that the university has shelled out hundreds of thousands of dollars to have the internet cleaned of the tarnishing references to that incident. Should organizations and corporations that do these bad things be allowed to have them forgotten? Many of us already know how horrible Monsanto is. Should Monsanto be allowed to spend huge amounts of money to have its image cleaned up? Or, as I feel, should these things never be forgotten, especially on the internet, where people who have never heard of them before can then see them fully and uncensored?
To be honest, such a task is strenuous at best, and probably ineffective. You may be able to erase something from parts of the great internet, but you cannot erase it from the minds that have already witnessed it. In a world where big media is losing its foothold on the truth, people are turning more and more to unofficial sources anyway, like blogs and YouTube channels. Can you erase them from a Google search? Can you erase them from social media? Facebook, Twitter and many others have become sources for spreading awareness of matters like child abuse, terrorism and state persecution of peaceful civilians. Can you erase everyone’s comments, posts and links? Maybe some, but you would be fighting a losing battle.
So maybe this really isn’t about whether non-person entities like universities should be allowed to be forgotten. Perhaps it is more a question of whether they can be forgotten. The Washington Post’s own article has brought the issue right back to the first page of a Google search, under the news tab. All we have to do is post a link to a Google search for the incident in any popular news comment section, and it will be right back in the search results. The article cannot be erased. Maybe a successful hacker could trash this copy of it, but I have more and can repost it anywhere. The point is that all we have to do is keep talking about it, and we’ve undone the University of California’s attempt to monetarily manipulate the internet.
Let me give you a few what-ifs. What if Hitler were alive today and were allowed to erase all internet references to the Holocaust? What if proponents of pipelines were allowed to erase the environmental damage caused by the construction and operation of past pipelines? And what if Donald Trump were allowed to erase everything he’s said and done, then run again for the GOP nomination in a couple of years? Would they be successful? I have to say no. Spend what you want on a cyber memory purge of your misdeeds, but far too many of us will remember what you’ve done. Some of us will make sure others do as well.
Last night One Eyed Lemon suffered 19 hack attempts, and by “suffered” I mean that each originating IP was blocked and OEL was not compromised. Previous attempts followed a simple pattern of three attempts per night for a week or two, then nothing for a month or so. That was the worst of it, so I took it as OEL simply being part of a randomly generated group of targets. No big, right? But 19 attempts shows focus and intent. Did I anger someone out there? I suppose it’s entirely possible, given that OEL posts morally sound viewpoints with reason and logic, and some people just can’t handle the truth. It’s also possible that someone simply saw One Eyed Lemon and felt like trying to hack it. Either way, whoever it is, the person behind this focused attempt to hijack OEL is an amateur, someone who just discovered the world of hacking.
Looking at the times of each attempt, there is no pattern in the IP addresses, the number of tries, or the timing, which tells me the person was working manually… you know, instead of using an automated process. An automated process would show a regular pattern in the timing of each attempt. Perhaps this person thinks they’re a professional hacker, maybe even claiming to be Anon now that they have some of the same tools. With 19 attempts at one to three tries each, that is as many as 57 individual attempts to log into my admin account on OEL over the course of 37 minutes. Does that show dedication and, therefore, anger? I don’t know; I’m not a hacker and never claimed to be all that knowledgeable on the topic. But when I envision sitting at my computer manually setting up each attempt, that looks like a kind of dedication to me. Yet there really is no way to know whether that dedication is the result of being pissed at OEL for some personal reason or whether it’s simply a game. If it’s anger, then… well, grow up and realize people will have differing opinions. If it’s simply a game, I wonder how long it will take them to be as proficient at it as they are at GTA 5 or whatever it’s at now.
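For the curious, the manual-versus-scripted reasoning above can be sketched in a few lines of Python. This is just a toy heuristic, not anything OEL actually runs: the timestamps are hypothetical stand-ins echoing the attempt times mentioned earlier, and the 0.1 threshold is an arbitrary choice. The idea is simply that a script tends to fire at near-regular intervals, while a human produces irregular gaps.

```python
from datetime import datetime
from statistics import mean, pstdev

def looks_automated(timestamps, cv_threshold=0.1):
    """Flag a series of login attempts as likely scripted when the
    gaps between them are suspiciously regular, i.e. the coefficient
    of variation (stdev / mean) of the gaps falls below the threshold."""
    times = sorted(datetime.fromisoformat(t) for t in timestamps)
    gaps = [(b - a).total_seconds() for a, b in zip(times, times[1:])]
    if len(gaps) < 2:
        return False  # too few attempts to judge either way
    avg = mean(gaps)
    return avg > 0 and pstdev(gaps) / avg < cv_threshold

# Hypothetical log times mirroring the post: 19:04, 19:38, 20:25 local.
manual = ["2016-05-01T19:04:00", "2016-05-01T19:38:00", "2016-05-01T20:25:00"]
# A made-up scripted series: one attempt exactly every five minutes.
scripted = ["2016-05-01T19:00:00", "2016-05-01T19:05:00", "2016-05-01T19:10:00"]

print(looks_automated(manual))    # False: irregular gaps, likely a human
print(looks_automated(scripted))  # True: clockwork gaps, likely a script
```

Of course, a smarter attacker could add random delays to a script and fool a check like this, which is exactly why it is a hint about the attacker, not proof.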
… and happy hacking. 😉