
Author: Charles Givre

Can you use Machine Learning to detect Fake News?

Someone recently asked me for assistance with a university project in which they were asked to predict whether a given article was fake news or not. They had a target accuracy of 70%. Since the topic of fake news has been in the news a lot, it made me think about how I would approach this problem and whether it is even possible to use machine learning to identify fake news. At first glance, this problem might seem comparable to spam detection; however, the problem is actually much more complicated. In an article on The Verge, Dean Pomerleau of Carnegie Mellon University states:

“We actually started out with a more ambitious goal of creating a system that could answer the question ‘Is this fake news, yes or no?’ We quickly realized machine learning just wasn’t up to the task.” 
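
It is worth seeing why the spam analogy only goes so far. Below is a minimal, spam-detection-style baseline, a sketch of my own rather than anything from the project; it assumes scikit-learn, and the toy corpus is obviously hypothetical. A model like this learns stylistic cues from labeled examples, and on curated datasets that alone can look impressive, but it has no way to check whether a claim is actually true, which is precisely the limitation Pomerleau describes.

```python
# A baseline sketch only: TF-IDF features plus logistic regression,
# the same recipe one would reach for in spam filtering.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical toy corpus; a real attempt would need thousands of
# labeled articles. 1 = fake, 0 = legitimate.
articles = [
    "Shocking! Miracle cure the government doesn't want you to see",
    "City council approves budget for road repairs next fiscal year",
    "Celebrity secretly replaced by a clone, anonymous insider claims",
    "Central bank holds interest rates steady, citing stable inflation",
]
labels = [1, 0, 1, 0]

model = make_pipeline(
    TfidfVectorizer(stop_words="english", ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(articles, labels)

# The model scores surface style, not truth: a sober-sounding
# fabrication would likely slip right past it.
print(model.predict_proba(["Aliens endorse mayoral candidate, insiders say"])[0][1])
```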


Drilling Security Data

Last Friday, the Apache Drill project released version 1.14, which has a few significant features (plus a few that are really cool!) that will enable you to use Drill for analyzing security data. Drill 1.14 introduced:

  • A logRegex reader, which enables Drill to read anything you can describe with a regex
  • An image metadata reader, which enables you to query images
  • A suite of GIS functionality
  • A collection of phonetic and string distance functions which can be used for approximate string matching.

This suite of functionality really expands what is possible with Drill and makes analysis of many different types of data possible. This brief tutorial will walk you through how to configure Apache Drill to query log files, or really any file that can be described with a regex.
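
To give a taste of what querying looks like once that is done, here is a short sketch of my own (not from the tutorial itself) that runs a query against Drill from Python over Drill’s REST API. It assumes a local Drill 1.14 instance on the default port 8047, a dfs storage plugin with a logRegex format already configured for the log files, and a hypothetical file path.

```python
# A minimal sketch, assuming a local Drill 1.14 instance whose dfs storage
# plugin already has a logRegex format configured for these log files.
import requests

DRILL_URL = "http://localhost:8047/query.json"  # Drill's REST query endpoint

# Hypothetical path: any file matched by the configured logRegex format works.
sql = """
SELECT *
FROM dfs.`/var/log/mysql/mysqld.log`
LIMIT 10
"""

response = requests.post(
    DRILL_URL,
    json={"queryType": "SQL", "query": sql},
    timeout=30,
)
response.raise_for_status()

# Drill returns the result set as a list of row dictionaries.
for row in response.json().get("rows", []):
    print(row)
```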


Book Review: Technically Wrong

I recently completed Technically Wrong by Sara Wachter-Boettcher. Let me start by saying that I’m glad Ms. Wachter-Boettcher wrote this book. The tech industry has a lot of issues which need to be brought out into the open, and it is definitely a positive development that people such as Ms. Wachter-Boettcher are bringing these issues to the forefront. It really is only recently that people are discussing the continuous erosion of privacy, misogyny in the tech industry, the lack of diversity, and many other issues. Whilst I would not deny any of these issues, I felt Wachter-Boettcher’s analysis was somewhat lacking and didn’t really get at the realities of working in the tech industry. Wachter-Boettcher cites numerous examples of tech gone wrong, such as a smart scale telling a two-year-old that he needs to lose weight, Facebook denying a Native American person an account because it felt that their name was not legitimate, and the abhorrent use of proprietary, black-box algorithms to make parole recommendations.

Again, it is definitely a positive development that Wachter-Boettcher and others are writing about these issues, but the alternatives and solutions she proposes seem a bit simplistic. While she doesn’t state this directly, much of the book seems to suggest that all of technology’s woes are caused by the lack of diversity in the tech industry; specifically, that “white guys” from elite universities are running everything. I don’t have an electronic copy of the book, but about halfway through I wanted to count the number of times the phrase “white guys” appears in it. Sometimes this phrase includes Asians, sometimes not.


Apple’s Newly Declared War on Data Collection (and Facebook?)

In the last week, beneath all the Trump and Kim Jong Un reporting, were several stories stating that Apple has in effect declared war on data collectors. Make no mistake: what Apple is doing is making it significantly harder for companies big and small to collect your personal data. The significance of this cannot be overstated. Companies like Google and Facebook derive their revenue from selling targeted advertising, and if gathering this data becomes significantly more difficult, it could affect their bottom lines.

The First Volley:  No More Comments and Share Buttons

Last week, I was listening to the keynote at WWDC and overall was pretty unimpressed as exec after exec droned on about new animojis or some other feature that I really didn’t care about. And then Craig Federighi launched the first volley: Safari is going to block Facebook and other social media like and share buttons, as well as shared comment sections. Facebook, Twitter, and other sites use these buttons to track your activity when you are visiting other sites. While it isn’t that big of a deal that this is happening on macOS, it is VERY significant that Apple is instituting this change on iOS as well. When I heard this, I was pretty shocked, but that was only the first volley; there were more to come.


Adventures and Misadventures in Data Science Interviews

I’ve been waiting for some time to publish this, but I wanted to write about my experiences interviewing for data science jobs. Here’s my story: I worked at Booz Allen for nearly seven years, but I felt it was time for a change. I very much like Booz Allen as a company, and if anyone is interested in working there, please don’t hesitate to contact me. But I felt I was ready for different challenges and started looking for work elsewhere.

Now that I’ve started a new position, I thought I’d share some observations about what I learned from interviewing at numerous companies. I wasn’t tracking how many companies I interviewed with, but it was a lot. I have a lot of government experience and got a number of offers from government contracting firms. However, I came to the conclusion that, in terms of career progression, joining another government contracting firm was not what I was looking for.

So here’s what I learned…


My Ideal Workspace

As more and more research shows that the open office design actually reduces productivity (here and here), and after recently sharing a post on LinkedIn about how GitHub “de-broed” its workspace, I thought I’d share my thoughts on what I like, and don’t like, in a workspace. Above is a picture of my home office with some labels. Not specifically labeled is the fact that there is plenty of natural light. One of the most depressing places I ever worked was a windowless cube farm where the developers liked to leave the lights off. I was going out of my mind!

  1. A Door:  My ideal workspace has a door so that when privacy is needed, I can close the door and when it is not, I can open it.
  2. A clock:  I know computers have clocks, but having a big visible clock is really helpful for making sure things run on time.
  3. A comfortable chair, with foot rest:  If I’m doing tech work for a long time, I don’t want to be sitting on something that will cause trips to the chiropractor.
  4. Big Monitors:  I’m a big fan of multiple, large monitors, as they really increase productivity.
  5. Music:  I like to listen to music, especially when coding.  When I’m working in more public spaces, I have headphones…
  6. Stress Relief: I play trombone, and when things get stressful, one can always reduce some stress by playing some Die Walküre… LOUDLY.
  7. Lots of Geek Books: Nothing sets the stage for coding like being surrounded by O’Reilly geek books.
  8. Family Photos or Other Personal Items: I do my best work in a space that feels like my own, so I think it is important that people can have a space with some of their personal items. Hence… I’m not a fan of hoteling or of workspaces that seat people at large shared tables.

What do you like in a work space?


Book Review: Automating Inequality

I recently read Automating Inequality by Virginia Eubanks and would like to share some thoughts.  This review is the first of several book reviews I’ve been working on about books relating to the problems which are emerging from technology. I’ll keep this brief…

The Good:

I am glad that the conversation about social problems caused by technology is expanding.  Books like Automating Inequality are good contributors to that discussion.  In this book, Eubanks highlights a few situations where technology has negatively affected people’s lives, primarily poor people.  This technology also serves to limit poor people’s lives and opportunities, creating what she refers to as a digital poorhouse.

Machine learning can be a powerful tool for developing predictive analytics, but it can also be abused. One abuse which I found particularly troubling is cited on pg. 137: a risk model which calculates a risk score for unborn children.

Vaithianathan’s team developed a predictive model using 132 variables–including length of time on public benefits, past involvement with the child welfare system, mother’s age, whether or not the child was born to a single parent, mental health, and correctional history–to rate the maltreatment risk of children in MSD’s historical data. They found that their algorithm could predict with “fair, approaching good” accuracy whether these children would have a “substantiated finding of maltreatment” by the time they turn five.
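
To make concrete what such a model looks like under the hood, here is a deliberately oversimplified sketch of my own. It is not Vaithianathan’s model, and the four features are hypothetical stand-ins for the 132 variables the book describes; the point is how little machinery separates historical records from a “risk score”.

```python
# An oversimplified illustration, NOT the actual model from the book:
# a logistic regression over a few hypothetical stand-in variables.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns (all hypothetical): months on public benefits, prior child-welfare
# contacts, mother's age, single-parent household (1/0).
X = np.array([
    [24, 3, 19, 1],
    [0, 0, 34, 0],
    [60, 5, 22, 1],
    [6, 1, 28, 0],
])
# Historical label: 1 = substantiated finding of maltreatment by age five.
y = np.array([1, 0, 1, 0])

model = LogisticRegression(max_iter=1000).fit(X, y)

# The "risk score" is simply a predicted probability for a new family.
print(model.predict_proba([[12, 2, 21, 1]])[0][1])
```

Everything contentious lives in the training data: a model like this can only reproduce whatever patterns, and whatever biases, the historical records contain.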


What I Found Lacking:

What I found lacking in Automating Inequality was the absence of alternative proposals. It is easy to criticize a technical solution, but these systems are often deployed against complex problems, and finding a solution often requires a lot of vigilance, persistence, and iteration. Eubanks discusses the issue of welfare abuse and seems to downplay the fact that welfare fraud is in fact a major issue in this country. With some basic research on Google you can unfortunately find countless cases of individuals convicted of welfare fraud. Clearly, welfare programs should make efforts to reduce fraud and make sure that their resources are going to people who truly need the assistance.

What Eubanks seemed to miss was what went wrong in the implementations she highlighted. In two cases, Eubanks described systems designed to improve the efficiency and efficacy of welfare programs. From the book, it sounded as if the designers of these programs implemented various technical systems to automate the intake process for benefits. What Eubanks didn’t discuss in the book was what was missing from these programs: continuous improvement. The government agencies that implemented these programs took the approach one would take when building a bridge or a tunnel: get it done, and once it’s done, move on to the next project. This doesn’t work for information systems because they are never done. Once you start using them, there will always be faults and opportunities to improve. If an organization can rapidly iterate and improve the solution over time, it will end up with an effective solution.

Eubanks ends the book with a proposed code of ethics for data scientists and other technologists. I wrote my own code of ethics for data scientists, and it is always interesting to me what others write on the subject. I particularly liked these points from Eubanks’ code of ethics:

  • I will not collect data for data’s sake, nor keep it just because I can
  • When informed consent and design convenience come into conflict, informed consent will always prevail.  (If only it were so… )

Overall, I found the book to be quite thought provoking, but I did disagree with some of the conclusions.


Why more women don’t code: A heartbreaking story with a good ending

I’ve been reading a lot lately about the ills of the tech industry, with a few book reviews in my queue to finish, and I posted a question on LinkedIn about what inspired people to get into tech. My motivation was to see if there was a difference between men and women. My hypothesis is that there are societal and cultural factors which discourage girls and women from studying tech (math, computer science, engineering, etc.), that as a result there aren’t enough qualified women to fill tech jobs, and that we ultimately end up with the current state of affairs, where men outnumber women three or four to one at most tech companies.

Anyway, I received the following private response from a former student to whom I shall refer as S.  S was a student in one of my recent classes and a delight to work with.  Her story is absolutely heartbreaking and needs to be heard.  I lightly edited it, only to remove some details which would identify her.

You can share my story, but I am not ready to have my name on it. I am going to be looking for a job soon, and not everyone will appreciate it. They will see me as slow and too old. I was 54 before I gave myself permission to study tech and coding languages. Growing up, girls were not encouraged to study math. I was teased about my abilities in math because I could not recite the times tables. I grew up believing that I couldn’t do math. At home, my brother received an early TI calculator. It was supposed to be shared between us, but that didn’t happen. Besides being annoying, it was clear that electronics were not for girls.

I began my university studies in psychology which seemed the only science that did not require math. I was bored and I tried to learn math on my own. Actual classes involved grades and that was disastrous. I somehow passed all the mathematics prerequisites and ended up in graduate school for chemistry. During my quantum mechanics class, I struggled. I went for help from the professor. He realized that I couldn’t do the times tables verbally and completely humiliated me.

At my job, I learned about agile, innovation, and human-centered design. I loved these ideas, as they provided a framework and a fresh vocabulary to talk about science and problem solving instead of just math. I excelled at facilitating these techniques. Many of the prototypes we needed to wireframe involved a website or an application. I became curious about data and technology, but I would never let myself work in this area. The risk of humiliation was too great. My supervisor had already realized that I stumbled verbally with numbers. I did not want to be in a position to lose my job while trying out new skills.

About the same time, I had a routine hearing evaluation and was diagnosed with great hearing but a serious auditory processing disorder. The audiologist predicted that I probably had terrible problems with spoken arithmetic and verbal math. I was thunderstruck. How could he know that? I had been punished as a child for exactly this issue. I internalized it as part of my self-image. Although I was a great reader, I was unable to recite the times tables or do my arithmetic. I couldn’t explain why; maybe I really was a bad kid. After digesting the audiologist’s report, I allowed myself to become more interested in data and technology.

I fight paralyzing “imposter syndrome” every time I sit in front of my computer. I began to take free classes online and go to meet-ups, and to learn even when I couldn’t talk about it well. I joined groups for women who code. I continue to learn, and I just signed up for an intensive software engineering boot camp. I currently volunteer as a teaching assistant for introductory Python at two different community women’s coding groups. I continue to attend meet-ups. I am not yet where I want to be, but I am finally allowed to move ahead. Data is going to change our world and I don’t want to miss out.


I took the #DeleteFacebook Challenge

In the last weeks, Facebook has been in the news a lot for its aggressive data gathering. What has surprised me is not that Facebook is in the news, but that it hasn’t happened much sooner. Facebook is possibly the most invasive data-gathering, privacy-invading platform the world has ever seen, despite the fact that it is cloaked behind a veil of childish logos and thumbs-up buttons. Additionally, Facebook has engaged in some truly abhorrent practices, such as gathering text-message and phone metadata from Android users, conducting secret psychological tests on over 700,000 users in 2012, and running ad programs that track users’ web activity off of Facebook, to say nothing of how Facebook was and most likely still is being used to propagate fake news.

As someone who has worked in various regulated industries (banking, government), it appalls me how companies like Facebook abuse their users’ privacy. My biggest issue is that Facebook disguises its data gathering efforts under a slick veneer of innocence which masks its true intent. Much like tobacco adverts of yore, Facebook and its “family” are targeted primarily towards younger people who don’t understand what they are giving up in exchange for the privilege of sharing their photos with their friends.

An extremely egregious example of this occurs on election days in the US. Facebook will ask users a question: “Did you vote today?” and give them a little sticker on their profile if they answer that they did. Now why do you think they would do that? To encourage people to vote? Hardly, though that may be a side benefit. No, the real reason they do this is to gather information about people’s voting history, which Facebook then uses in its targeted political campaigns. Don’t believe me? You can read about it here: https://politics.fb.com.

The problem here is that Facebook doesn’t ask its users for consent in a way that a typical user will understand. I am not trying to mock Facebook users, but most people who don’t work in data analytics don’t really understand the implications of mass data gathering. The image above is how Facebook Messenger asks for permission to gain access to your contacts, SMS, and phone call logs (courtesy of Ars Technica). Nowhere in this image does it say anything about collecting SMS, phone logs, or anything for that matter. It looks cute, and most people wouldn’t think twice about clicking OK.

Silicon Valley’s Culture Needs to Change

The biggest issue I have with some of what Facebook has been caught doing is that enough of the company felt it was acceptable. That’s the bigger issue here. Most likely, some manager at Facebook decided: why don’t we gather all our Android users’ text data and mine it? And nobody said a bloody thing. No leaks to the news media, no disgruntled employees writing blog posts about it, nothing…. Which ultimately means that everyone involved felt it was totally acceptable to take their users’ SMS and phone logs. This practice only ended when Android disabled the functionality, so it wasn’t as if Facebook execs had some crisis of conscience.

But I’m a realist. Facebook’s revenue is generated by selling targeted advertising, and the way it targets its ads is by gathering data about its audience. Whilst Mr. Zuckerberg can write pithy non-apologies about it, nothing will change, because this is how Facebook makes money. The only way this changes is for people like you to get off of Facebook (and Instagram, and WhatsApp) in significant numbers, and for advertisers to stop spending money on Facebook ads. As long as there is a market for this data, the sad reality is that there will be more and more companies trying to invade your privacy and sell your data to the highest bidder.

Educate Yourself About How Companies Monetize Your Data

You need to understand how companies are using your data and make a conscious choice about whether a company provides enough value to justify the loss in privacy. Frankly, this is why I prefer using companies whose primary revenue stream is not derived from data monetization. This is why I choose iPhones instead of Android, iMessage instead of WhatsApp, and socializing with real friends instead of Facebook. You can generally tell whether this is the case by whether you have to pay for a service. Generally speaking, companies which charge for their services are not looking to invade your privacy to the same degree as companies that offer their services “for free”. As the saying goes: “If you aren’t paying for it, YOU are the product.”


A New Threat: Stalkerware

What would you do if you attended a political event or protest and the next day received targeted adverts for that political cause? Would that be cause for concern? After all, you don’t post about your political views, so how did the advertisers know? You didn’t sign any rosters or register, so how did they know you were there?

I recently became aware of a new category of computer evil: stalkerware. I thought I was being clever and would have the privilege of coining a new term, but a few other people have already coined it. However, I would like to propose a slightly different definition. In an article originally appearing on Motherboard, stalkerware is defined as:

Stalkerware is defined as invasive applications running on computers and smartphones that basically send every bit of information about you to another person. This covers the gamut from programs that can be purchased online to give third parties access to basically everything on your computer from photos, text messages and emails to individual keystrokes, to apps that activate your Mac’s webcam without your knowledge.

I’m not really seeing the difference between this definition and “traditional” spyware, but stalkerware as I define it is:

Software that automatically reports your location on a regular basis without your knowledge or consent.

The stalkerware that Motherboard writes about consists of dedicated programs or apps that someone deliberately installs on a target’s mobile device in order to track their activity for whatever reason. Stalkerware as I define it is a little different, in that it is not targeted at one individual. These are applications installed on mobile devices that track your every move–literally stalking you–most likely without your knowledge.
