As a follow-up to yesterday's story about @AP's fake tweet, it has been reported that the hacked message came about an hour after company employees received an expertly crafted spear-phishing email.
Spear-phishing is getting harder to detect as successful practices inform future "phishes." What doesn't work is abandoned and reworked, and the messages become steadily less suspicious.
It may come as a surprise or not, but 19% of spear-phishing attempts are successful. Someone in an organization takes the personalized bait and hands out secure information.
The effects of spear-phishing can be avoided by fact checking. I haven't seen a copy of the message received by AP employees yesterday. It would be interesting to see it and fact check it.
Can anyone find it?
As many articles have already made clear, Americans will react to news that sounds like terrorism.
Today's fake tweet shows how sensitive consumers of information really are.
A hack attack on the Associated Press' Twitter account resulted in "an erroneous tweet" claiming that two explosions had occurred in the White House and that President Barack Obama was injured. It didn't take long (two minutes) for Twitter to suspend the @AP account.
More than 4,000 retweets later, the credibility of the message was dealt a fatal blow when an AP spokesperson told NBC News the news was false.
Like the EKG of a country, the Dow Jones industrial average just after 1 p.m. shows the collective heartbeat (above). More than 140 points were lost in a flash; five minutes later much of the loss was regained.
According to Bob Sullivan of NBC News: "It's incredible what a single 12-word lie can do."
How could being an investigative searcher make a breaking lie less effective?
Fact checking the accuracy of the claim is a little trickier in the case of Twitter. Breaking news often comes through this channel before being picked up by major news outlets.
That is probably the clue. AP wouldn't be the first to break the news. Someone on the scene would have said it first; AP would carry it a minute or more later. All one would have to do is look for the source of the AP tweet.
Not being able to find an earlier tweet about this news is the tell-tale sign that it isn't credible. A good search engine for tweets is Topsy (http://topsy.com). Check it out before you react with your gut.
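The "look for an earlier source" check above can be sketched in a few lines. The tweets below are hypothetical placeholders, not real search results; the point is simply that a wire service relaying breaking news should not be the earliest timestamp for a story.

```python
from datetime import datetime, timezone

# Hypothetical (source, UTC timestamp) pairs a tweet search might return.
# In this data set, no eyewitness tweet precedes the @AP tweet.
tweets = [
    ("@AP", datetime(2013, 4, 23, 17, 7, tzinfo=timezone.utc)),
]

ap_time = dict(tweets)["@AP"]

# Anyone who reported the story before the wire service did:
earlier_sources = [source for source, t in tweets if t < ap_time]

# An empty list is the tell-tale sign: nobody was on the scene first.
print(earlier_sources)  # []
```

The same reasoning works by hand with a tweet search sorted oldest-first; the code just makes the criterion explicit.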
Amateur Whale Research Kit?
The term "crap detector" comes from Crap Detection 101: How to tell accurate information from inaccurate information, misinformation, and disinformation. Put your crap detector to work here: http://www.icrwhale.net/products/amateur-whale-research-kit
Some of the usual investigative techniques (backlinks, fact checking) don't work very well. What is it that "tells" you this information, at face value, cannot be trusted?
The price of cyber crime is astounding.
- UK Guardian: Consumers and businesses in the UK lost an estimated £27 billion in 2012 due to cybercrime.[i]
- Ponemon Institute: The average annualized cost of cybercrime for 56 benchmarked U.S. organizations is $8.9 million per year.[ii]
- People’s Public Security University of China: In 2012, economic losses from Internet crimes in China totaled an estimated $46.4 billion (RMB 289 billion).[iii]
And it's growing annually.
So what does being gullible cost the average American?
See if you can find the cost to the average Senior Citizen in the US today.
What does this say about the need to investigate online information?
[i] John Burn-Murdoch, “UK was the world’s most phished country in 2012 – why is it being targeted?”, www.guardian.co.uk, last modified on February 27, 2013, http://www.guardian.co.uk/news/datablog/2013/feb/27/uk-most-phishing-attacks-worldwide.
[ii] “2012 Cost of Cyber Crime Study: United States”, Ponemon Institute, October 2012.
[iii] “Internet crimes cost China over $46 billion in 2012, report claims”, thenextweb.com, last modified January 29, 2013.
Time flies! I've neglected this blog for about 6 weeks.
Dennis O'Connor and I are deep into authoring a book on Teaching Information Fluency. Our deadline is the end of April.
Writing a book is a discovery activity for me. Last time I wrote this much was my dissertation and I discovered plenty about flow and mathematics while doing that.
This time, while it would seem I've traversed the topic of information fluency through this blog and the 21st Century Information Fluency Project website, there are still Aha! moments.
As I was thinking about the process of querying, it occurred to me that there's a lot more to it than translating a natural language question into a query. That's just the visible query, the one the search engine responds to. There's also an invisible query, the one you don't enter into the text box: the keywords or concepts that remain in your head.
These help you filter the results of the query. Some results are more relevant than others, not because of their ranking, but because you have priorities in mind that the search engine is unaware of.
It's generally ineffective to enter everything you're looking for in a search box. Doing so over-constrains the search and produces fewer results, sometimes none. It's better to submit a small number of keywords, two or three, and scan the results against your invisible query.
Using one of our classic examples, "How many buffalo are there in North America today?", a good query is buffalo north america (bison is better than buffalo). Yet that's not really enough information to answer the question, which is going to be 1) a number and 2) as recent as possible. That's the invisible part you have to remember throughout the process. You choose results that satisfy 1 and 2; if they don't, you're probably not answering the question.
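The visible/invisible split can be made concrete with a small sketch. The result snippets below are made up for illustration, not output from any real search engine; the "invisible query" is encoded as a filter requiring a count and a recent year, matching criteria 1 and 2 above.

```python
import re

# Visible query: the few keywords actually submitted to the engine.
visible_query = "bison north america"

# Invisible query: the criteria kept in your head while scanning.
# For the bison question, a useful result must contain
# (1) a count and (2) a recent year.
def satisfies_invisible_query(snippet, min_year=2010):
    has_count = re.search(r"\b\d{1,3}(?:,\d{3})+\b", snippet) is not None
    years = [int(y) for y in re.findall(r"\b(?:19|20)\d{2}\b", snippet)]
    is_recent = any(y >= min_year for y in years)
    return has_count and is_recent

# Hypothetical snippets a search for the visible query might return:
results = [
    "Bison once roamed North America in vast herds.",
    "A 2017 survey estimated roughly 500,000 bison in North America.",
    "In 1889 only 1,091 bison remained.",
]

# Keep only the results that also pass the mental filter:
answers = [s for s in results if satisfies_invisible_query(s)]
print(answers)  # only the recent estimate with a count survives
```

Note that the 1889 snippet contains a number but fails the recency criterion, which is exactly the kind of result the invisible query is there to screen out.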
One premise of the Filter Bubble is that the machine learns from us and hones its output to our preferences. That becomes a harder task when we don't feed the machine everything we have in mind. Keeping part of the query invisible may be a pretty good way to keep the Filter Bubble from encompassing us.
Think about what you're not querying that you are still looking for next time you search.