Friday, November 21, 2014
Know yer Hacker
First things first… it needs to be abundantly clear that hackers aren’t inherently bad… the word “hacker” doesn’t mean “criminal” or “bad guy.” (Not all hackers are Anonymous, and not all of Anonymous are hackers.) Geeks and tech writers often refer to “black hat,” “white hat,” and “gray hat” hackers. These terms define different groups of hackers based on their behavior. The contemporary definition of the word “hacker” is controversial, and could mean either someone who compromises computer security or a skilled developer in the free-software or open-source movements.
Black-hat hackers, or simply “black hats,” are the type of hacker the popular media seems to focus on. Black-hat hackers violate computer security for personal gain (such as stealing credit card numbers or harvesting personal data for sale to identity thieves) or for pure maliciousness (such as creating a botnet and using that botnet to perform DDoS attacks against websites they don’t like).
Black hats fit the widely held stereotype that hackers are criminals performing illegal activities for personal gain and attacking others. They’re the alleged computer criminals. A black-hat hacker who finds a new, “zero-day” security vulnerability would sell it to criminal organizations on the black market or use it to compromise computer systems.
White-hat hackers are the opposite of the black-hat hackers. They’re the alleged “ethical hackers,” experts in compromising computer security systems who use their abilities for good, ethical, and legal purposes rather than bad, unethical, and criminal purposes.
For example, many white-hat hackers are employed to test an organization’s computer security systems. The organization authorizes the white-hat hacker to attempt to compromise their systems. The white-hat hacker uses their knowledge of computer security systems to compromise the organization’s systems, just as a black-hat hacker would. However, instead of using their access to steal from the organization or vandalize its systems, the white-hat hacker reports back to the organization and informs them of how they gained access, allowing the organization to improve their defenses. This is known as “penetration testing,” and it’s one example of an activity performed by white-hat hackers.
A white-hat hacker who finds a security vulnerability would disclose it to the developer, allowing them to patch their product and improve its security before it’s compromised. Various organizations pay “bounties” or award prizes for revealing such discovered vulnerabilities, compensating white-hats for their work.
Very few things in life are clear black-and-white categories. In reality, there’s often a gray area. A gray-hat hacker falls somewhere between a black hat and a white hat. A gray hat doesn’t work for their own personal gain or to cause carnage, but they may technically commit crimes and do arguably unethical things.
For example, a black-hat hacker would compromise a computer system without permission, stealing the data inside for their own personal gain or vandalizing the system. A white-hat hacker would ask for permission before testing the system’s security and alert the organization after compromising it. A gray-hat hacker might attempt to compromise a computer system without permission, informing the organization after the fact and allowing them to fix the problem. While the gray-hat hacker didn’t use their access for bad purposes, they compromised a security system without permission, which is illegal.
If a gray-hat hacker discovers a security flaw in a piece of software or on a website, they may disclose the flaw publicly instead of privately disclosing the flaw to the organization and giving them time to fix it. They wouldn’t take advantage of the flaw for their own personal gain — that would be black-hat behavior — but the public disclosure could cause carnage as black-hat hackers tried to take advantage of the flaw before it was fixed.
“Black hat,” “white hat,” and “gray hat” can also refer to behavior. For example, if someone says “that seems a bit black hat,” that means that the action in question seems unethical.
Editor’s note: At the end of the day… like all things of this Spaceship Earth… Perceived Reality is an ever-changing mosaic of shades of grey. Now the pendulum swings toward the black… now it trends toward the white. Hackers, for good or ill, are the ones who have “gotten a clue” as to the technological workings of our modern world. That doesn’t mean that they have any better ethical grip… they simply know how to control the artifacts.
Original article by Chris Hoffman
Chris Hoffman is a technology writer and all-around computer geek. He's as at home using the Linux terminal as he is digging into the Windows registry. Connect with him on Google+.
Friday, November 14, 2014
"The distinction between a corporation and an algorithm is fading. Does that make an algorithm a person?"
- Jaron Lanier
A lot of us were appalled a few years ago when the American Supreme Court chose, out of the blue, to decide a question it hadn't been asked, and declare that corporations are people. That's a cover for making it easier for big money to have an influence in politics. But there's another angle to it, which I don't think has been considered as much: the tech companies, which are becoming the most profitable, the fastest rising, the richest companies, with the most cash on hand, are essentially people for a different reason than that. They might be people because the Supreme Court said so, but they're essentially algorithms.
If you look at a company like Google or Amazon and many others, they do a little bit of device manufacture, but the only reason they do is to create a channel between people and algorithms. And the algorithms run on these big cloud computer facilities.
The distinction between a corporation and an algorithm is fading. Does that make an algorithm a person? Here we have this interesting confluence between two totally different worlds. We have the world of money and politics and the so-called conservative Supreme Court, with this other world of what we can call artificial intelligence, which is a movement within the technical culture to find an equivalence between computers and people. In both cases, there's an intellectual tradition that goes back many decades. Previously they'd been separated; they'd been worlds apart. Now, suddenly they've been intertwined.
The idea that computers are people has a long and storied history. It goes back to the very origins of computers, and even from before. There's always been a question about whether a program is something alive or not since it intrinsically has some kind of autonomy at the very least, or it wouldn't be a program. There has been a domineering subculture—that's been the most wealthy, prolific, and influential subculture in the technical world—that for a long time has not only promoted the idea that there's an equivalence between algorithms and life, and certain algorithms and people, but a historical determinism that we're inevitably making computers that will be smarter and better than us and will take over from us.
That mythology, in turn, has spurred a reactionary, perpetual spasm from people who are horrified by what they hear. You'll have a figure say, "The computers will take over the Earth, but that's a good thing, because people had their chance and now we should give it to the machines." Then you'll have other people say, "Oh, that's horrible, we must stop these computers." Most recently, some of the most beloved and respected figures in the tech and science world, including Stephen Hawking and Elon Musk, have taken that position of: "Oh my God, these things are an existential threat. They must be stopped."
In the past, all kinds of different figures have proposed that this kind of thing will happen, using different terminology. Some of them like the idea of the computers taking over, and some of them don't. What I'd like to do here today is propose that the whole basis of the conversation is itself askew, and confuses us, and does real harm to society and to our skills as engineers and scientists.
A good starting point might be the latest round of anxiety about artificial intelligence, which has been stoked by some figures who I respect tremendously, including Stephen Hawking and Elon Musk. And the reason it's an interesting starting point is that it's one entry point into a knot of issues that can be understood in a lot of different ways, but it might be the right entry point for the moment, because it's the one that's resonating with people.
The usual sequence of thoughts you have here is something like: "so-and-so," who's a well-respected expert, is concerned that the machines will become smart, they'll take over, they'll destroy us, something terrible will happen. They're an existential threat, whatever scary language there is. My feeling about that is it's a kind of a non-optimal, silly way of expressing anxiety about where technology is going. The particular thing about it that isn't optimal is the way it talks about an end of human agency.
But it's a call for increased human agency, so in that sense maybe it's functional, but I want to go a little deeper into it by proposing that the biggest threat of AI is probably the one that's due to AI not actually existing, to the idea being a fraud, or at least such a poorly constructed idea that it's phony. In other words, what I'm proposing is that if AI were a real thing, then it probably would be less of a threat to us than it is as a fake thing.
What do I mean by AI being a fake thing? That it adds a layer of religious thinking to what otherwise should be a technical field. Now, if we talk about the particular technical challenges that AI researchers might be interested in, we end up with something that sounds a little duller and makes a lot more sense.
For instance, we can talk about pattern classification. Can you get programs that recognize faces, that sort of thing? And that's a field where I've been active. I was the chief scientist of the company Google bought that got them into that particular game some time ago. And I love that stuff. It's a wonderful field, and it's been wonderfully useful.
But when you add to it this religious narrative that's a version of the Frankenstein myth, where you say well, but these things are all leading to a creation of life, and this life will be superior to us and will be dangerous ... when you do all of that, you create a series of negative consequences that undermine engineering practice, and also undermine scientific method, and also undermine the economy.
The problem I see isn't so much with the particular techniques, which I find fascinating and useful, and am very positive about, and should be explored more and developed, but the mythology around them which is destructive. I'm going to go through a couple of layers of how the mythology does harm...
Jaron Lanier is a computer scientist, author, and composer. He is one of the most celebrated technology writers in the world, and is known for charting a humanistic approach to technology appreciation and criticism. He was awarded the Peace Prize of the German Book Trade in 2014. His book "Who Owns the Future?" won Harvard's Goldsmith Book Prize in 2014.
Wednesday, November 5, 2014
Remember, Remember... the 5th of November...
V for Vendetta. Before the phrase was made infamous by the 2006 film (written and produced by the Wachowskis), it referred to Bonfire Night, celebrated in the UK. Here is a pertinent snippet from an earlier post...
"Guy Fawkes Night or Bonfire Night is an annual celebration on the evening of the 5th of November. It marks the failure of the Gunpowder Plot of 5 November 1605, in which a number of Catholic conspirators, including Guy Fawkes, attempted to blow up the Houses of Parliament, in London, United Kingdom. It is primarily marked in the United Kingdom where, until 1859, it was compulsory by law to celebrate the deliverance of the King of Great Britain.
This day was also celebrated in the Colonies and was called "Pope's Day". It was the high point of 'anti-popery' (in the term of the times) in New England. In the 1730s or earlier Boston's artisans commemorated the day with a parade and performances which mocked Catholicism and the Catholic Stuart pretender. It was also the day when the youth and the lower class ruled. They went door to door collecting money from the affluent to finance feasting and drinking."
It seems fitting to remember the 5th of November... the day after election day... here in the US. The American media has been gorging on political punditry for several months... Although this spectacle was billed as a second-rate event... dismissed as mid-term elections... it has dominated the airwaves, none the less. Alas, poor Babylon.
Apparently, those ravenous, rascally republicans have "swept" the mid-terms... taking control of the US Senate, while retaining control of the US House of Representatives. They zealously did this, foaming at the jowls, to save the American public from the Affordable Care Act... from an obviously insecure border and from "he who must not be xenophobically named" ...President Barack Obama. In other words, "We the People" need to be saved from ourselves. After all, if the government doesn't trust the people, they should dissolve them and elect a new people... Alas, poor Babylon.
Strange days have found us... Oregon joins Washington State and Colorado in legalizing Pot, while it remains an illegal and controlled substance in the eyes of the Feds. That seems a little schizophrenic to me... The Federal government should make up its mind on the legalization of Marijuana. As for Washington State, Colorado and now Oregon... the people have spoken.
So now the Oregon State government, under the authority of the Oregon Liquor Control Commission, will control and tax Ganja... Alas, poor Babylon.