
Category: Data Science

Fixing STEM Education

To both of my loyal readers, I apologize for not writing anything in a while, but I have been absolutely slammed with classes and conference presentations.  Anyway, I’ve been doing a lot of thinking about my earlier post about Teaching Data Science in English.   The post provoked a decent response, mostly positive.

One reader sent me the following comment about my post which I’ve decided to quote (with permission) in its entirety because I think it accurately reflects why people get so frustrated when they try to learn mathematical concepts. What interested me was that this individual took action and “translated from mathspeak to English” and all of a sudden she was able to understand the underlying concepts.

Awhile ago I read a piece you had written on LinkedIn about making ‘mathspeak’ and ‘techspeak’ (i.e. coding) more accessible to regular people, by decreasing mathematical notation usage and increasing the use of real words in explanations of formulas and concepts. It was something that stayed with me because I’ve always understood broader mathematical concepts but have always had trouble with the mechanics, and I think a lot of that has had to do with the amount of notation used…math seems like a foreign language sometimes, and there are 2 levels of understanding: the first is merely deciphering the ‘foreign language’, which already puts me out of my comfort zone (think reading Spanish or French if you are a native English speaker) and then understand the underlying concepts, which becomes harder due to the fact that it’s written in a ‘non-native’ language. Recently I’ve started taking an online course in machine learning on XXX. Already in the second lesson, he dove straight into notation-filled formulas, and I was starting to get that overwhelmed feeling that I’m familiar with from previous years of math. But I had what you wrote in my mind, and I thought I’d give it a shot and manually ‘translate’ the formulas and equations into English, and stick with that. Well, I did that, and it worked so well. I feel that I am able to follow along with the underlying theory of the class and by extension, the formulas and algorithms he presented in ‘mathese’ whereas before I would have shut-down and assumed it was beyond my grasp. Thanks so much for highlighting this aspect of the math/English understanding divide. It is continuously helpful for me. (Emphasis mine)

I’d like to share another related story.  One of my first paying jobs was working for KUAT public television as the web developer (www.kuat.org), and I wanted to do some things that required automating a data flow from an archaic DOS-based database.  I was teamed up with a programmer who helped me build the process, and in doing so, I learned how to write regular expressions.  I got so into it, I nearly automated myself out of a job.

Fast forward a year or so, when I was nearly done with my CS degree, I had to take an upper-level CS course on Automata, Grammars and Languages, which included regular expressions in the course description.  I was pretty excited because by this point I had become a master at regular expressions and was looking forward to a class in which I already knew some of the material going in.  Boy, was I in for a shock.  When we got to the regular expressions section, it degenerated into a plethora of Greek letters and assorted jargon, to the point where I truly loathed going to class.

Theory Should Not Be Taught at the Expense of Application

What I also realized in that CS class was that while most of my fellow students may have passed the tests, they did not have any clue how to use regular expressions in real life, or why you would want to use them in the first place.  While we were spending time writing expressions that match ‘aaaaaaabababaaaaa‘ and drawing the automata that “implement” them, the knowledge of how to apply any of this to a real-life problem, such as extracting data artifacts from raw data, was completely lost on the class.

What if the instructor had started the class by showing us this:

import re

text = 'Contact me at jane.doe@example.com'  # hypothetical input
pattern = r'([a-zA-Z0-9_.]+)@([a-zA-Z0-9_.]+\.\w{2,3})'

matchObj = re.search(pattern, text)
if matchObj:
    email = matchObj.group(0)    # the entire matched address
    account = matchObj.group(1)  # the part before the @
    domain = matchObj.group(2)   # the part after the @

If you’re not familiar, this brief Python example demonstrates how to find an email address in text and extract the full address, the account, and the domain.

I don’t think I’m saying anything new here, but too many technical classes, both in academia and out, spend a disproportionate amount of time on the underlying theory while ignoring or downplaying the actual application of the concepts being taught.  The result is that many students walk away frustrated, not understanding the actual use of what they are learning, and while professors and instructors may pat themselves on the back for preserving the “purity” of their curricula, I would argue that they have utterly failed in their task of educating their students.

The bottom line is that some people are really interested in theory, but for knowledge to be translated into something useful, students must be exposed early and often to a theory’s application.  So if you are designing STEM training or a class at a university, don’t forget the importance of demonstrating how to apply the concepts you are teaching.


Something is Rotten in the State of Data

I’m writing this blog post in the departure lounge at Heathrow, on my way back from Strata + Hadoop World, London.  Whilst at Strata, speakers kept coming back to the idea that an ever-growing number of large businesses are not really happy with the investments they have made in analytics and data science.  One of the speakers quoted a 2016 Forrester survey which claimed that:

  • 29% of firms are good at translating analytics results into measurable business outcomes.
  • Satisfaction with analytics initiatives fell 20% between 2014 and 2015.
  • 50% of firms expect stagnation or a decrease in big data/data lake investments in 2016.

These are very disappointing numbers, though not entirely surprising.  Using the “5 Whys” technique, an intra-management dialogue might go something like this:


A Social Contract for Data Collection

I just returned from Strata + Hadoop World in San Jose, where I gave a talk entitled “Kosher Collection: Best Practices in Data Handling”.  I really had an amazing time at Strata this year, and major kudos to the organizers for putting on a great show.

The central premise of my talk is that in today’s world, there is a social contract between data collectors and consumers.  Essentially, the agreement is that consumers give their personal data to a data collector in exchange for mutual benefit.  The problem is that consumers generally lack an understanding of the technology and of how their data is collected, and as a result are unable to provide informed consent.  Furthermore, this issue is likely to be exacerbated in the future as the opportunity to opt out of mass data collection disappears.


The Two Most Important Skills for a Data Scientist

I saw an article a little while ago on LinkedIn (which at the time of writing I cannot find), but the basic premise of the article was that problem solving is the most important skill for data scientists to be effective at their jobs.  (If anyone can find the article, please send me a PM, as I’d like to credit the author.)  The article stood out to me because most articles that begin with “Top X Skills for a Data Scientist” usually feature some list like this one:

  1. Education
  2. SAS or R
  3. Machine Learning
  4. Advanced Statistics
  5. Python
  6. Hadoop
  7. SQL
  8. Unstructured Data
  9. Intellectual Curiosity
  10. Business Acumen
  11. Communication Skills

While many people like to focus on various technical aspects of data science, such as data engineering or machine learning, at the end of the day, if you cannot apply these skills in a practical way to real-life problems, you will not be an effective data scientist.  In my view, effective data scientists must be masters of both problem solving and critical thinking.  The two are clearly related, but they are not the same: not every problem requires critical thinking, and not all critical thinking is aimed at solving a problem.

Data Science is the Union of Critical Thinking and Problem Solving

To illustrate this union, there is a famous example from WWII.  In an effort to minimize the number of bombers shot down over Europe, the Center for Naval Analyses (CNA) had conducted a study of the damage done to aircraft that had returned from missions, and had recommended that armor be added to the areas that showed the most damage.  This would seem like a relatively easy optimization problem…put more armor where there are a lot of bullet holes.  Indeed, this was the conclusion at which the CNA arrived.

However, a mathematician named Abraham Wald was given the same data, consisting of how bullet holes were distributed across aircraft which returned from sorties, and realized that there was a problem with these conclusions.  Dr. Wald recognized that the study only considered aircraft which had returned from sorties, not those which had been shot down, and that the correct conclusion was therefore to put additional armor where there was little damage.  In effect, the bullet holes demonstrated that the aircraft could take damage in those areas and still fly, whereas areas with few bullet holes demonstrated the opposite.  The Allied forces adopted this conclusion and significantly improved their survival rates.

To me this demonstrates the perfect blend of problem solving and critical thinking.  The original analysis was simple problem solving.  Someone was tasked with figuring out where to put more armor.  They did some analysis, came up with an answer and–in their eyes at least–solved the problem.  Unfortunately, the conclusion was completely incorrect.  The critical thinking came in when Dr. Wald asked, “What is missing from this data?” (or, most likely, “Was fehlt in diesen Daten?”), a question that ultimately led him to the correct conclusion.

Applying Critical Thinking to Data Science

My first real introduction to critical thinking came not at university, but rather during my first few months working as an intelligence analyst at the Central Intelligence Agency.  At the time, after about four months on the job, new analysts would be pulled out of their offices and enrolled in a four-month-long analytic training program.  I can honestly say that I learned more about analysis in this program than I did in my entire college career.  (Unfortunately, that says a lot about the state of higher education, but we’ll leave that for another time.)  While I cannot get into the details, some of the most interesting lessons centered around examining intelligence failures, and how when critical thinking is replaced by group-think or other cognitive biases, the results are usually not good.  The course forced you to look at situations with a critical eye, examining all assumptions and putting hypotheses through an extremely high degree of rigor.  I believe that the thought processes and critical thinking skills I learned at the Agency are what enable me to approach data problems with a unique perspective and, as a result, come up with effective solutions.

Many people ask me how they can become a data scientist, and while I can recommend various courses that will help you develop your technical skills, the application of those skills–and ultimately your success as a data scientist–depends on your critical thinking and problem solving abilities.  Can you look at an analysis and find the logical flaws?  Can you think critically about problems?  In the classes I teach, I always try to infuse critical thinking into the exercises so that students must think about their answers and ask things like “Does this answer make sense?”  (Naturally, I’ll put in a plug for my upcoming classes at BlackHat in Las Vegas in August: the Crash Course in Data Science for Hackers and the Crash Course in Machine Learning.)

Learning Critical Thinking Skills

Hopefully, by now I’ve convinced you of the need for critical thinking to be a part of a data scientist’s toolkit. You might be asking yourself, how do I hone this skill?  Well… stay tuned, as that sounds like the subject of my next post!


Data Science is Not a Binary Condition

There are a lot of bytes floating about the internet making the case for who is a data scientist, including a whole subset of posts about who the “real” data scientists are.  I have observed that this philosophy manifests itself in data science training programs as well.  Right now, if you want to become a data scientist, it’s an all-or-nothing proposition.  I would place the available programs in three categories:

  1. Online training from companies such as Coursera, Udacity, and others
  2. 12-14 week bootcamp-style data science courses
  3. University programs

Many of these programs look amazing, and if I had the time I would love to take one (or all) of them.  That said, all of these programs assume that their students will emerge as data scientists, and as a result, they teach a comprehensive set of skills.

A Different Approach

There are certain professions in which you either are a member or you are not, and non-members practicing the profession can have disastrous consequences.  Medicine, for instance: you either are a medical doctor or you are not, and when individuals who are not qualified to practice medicine dispense medical advice, the results are often disastrous–best illustrated by the anti-vaccine movement, which at this point is being pushed by many celebrities rather than legitimate medical professionals.  Engineering would be another example.  You wouldn’t want to drive over a bridge that I designed because, well… I have no idea how to do it.

However, these professions all have professional associations and government regulators who determine who has the requisite skills to call themselves members of that profession.  This does not exist for data science, and for good reason.  Data science as practiced today is a mixture of several other disciplines and can be applied to nearly any other discipline.  The breadth of skills that fall under the data science umbrella is simply enormous.

In 2013, O’Reilly published a short booklet entitled Analyzing the Analyzers, in which they identified four main groupings of individuals who consider themselves data scientists:

  • Data Creative
  • Data Developer
  • Data Researcher
  • Data Businessperson

You can see from the booklet’s skill breakdown that these groups are far from homogeneous.  Data science differs from other professions in that in other professions–such as medicine–practitioners MUST have a core set of knowledge in order to be a member of that profession.  As the booklet illustrates, that is not really the case for data science.  One could be a machine learning master, yet have no experience with big data, and still legitimately be said to be doing data science work.  Therefore, I’d like to propose that data science be viewed as a spectrum of skills rather than a binary condition.

In short, I believe that data scientists spill too much ink trying to label individuals as “real” (or “fake”) data scientists.  Instead, data scientists should spend that time determining what skills actually make up data science.

Teaching Data Science Skills Rather than Data Science

This slight twist on the approach has implications for training.  If data science is no longer an exclusive club, but rather a collection of skills, then those skills, or portions thereof, can be taught to individuals who have no interest in becoming data scientists.  In other words, data science training could be about teaching anyone who does analytic work data science skills which they can incorporate into their workflow.  The goal of this kind of training would not be to mint full data scientists, but rather to teach individuals data science skills relevant to their profession.  This kind of training could be a lot shorter and less comprehensive, but in the end, I believe it would be more practical for the thousands of individuals out there seeking to incorporate data science into their work who don’t really have the time or the desire to become data scientists.  Mind you, I am not proposing dumbing data science down; rather, I am suggesting that in addition to what is currently offered, data science training can be effective if it is tailored to specific audiences and focuses on the techniques that are directly relevant to those audiences.

TL;DR

The data science world today views data science as a binary condition: either you are a data scientist or you are not.  However, data science should be viewed as a collection of skills which virtually any professional can incorporate into their workflow.  If data science training were viewed in this context, organizations could increase their use of data science by training their current staff in the data science skills that are relevant to their work.


Teaching Data Science in English (not in Math)

I spend most of my time now teaching others about data science, and as such I do a lot of research into what is going on in data science education.  Recently, I decided to take an online machine learning course, and it led me to a serious question: why don’t we use pseudo-code to teach math concepts?

Consider the following:
RSS = \sum_{i=1}^{n} (y_i - \hat{y}_i)^2

This is the formula for the Residual Sum of Squares which, if you aren’t familiar, is a metric used to measure the effectiveness of regression models: y_i is an actual value, \hat{y}_i is the corresponding prediction, and n is the number of observations.

Now consider the following pseudo-code:

residuals_squared = (actual_values - predictions) ^ 2
RSS = sum( residuals_squared )

This example expresses the exact same concept, and while it does take up more space on the page, in my mind at least, it is much easier to understand.  I don’t have any empirical data to back this up, but I suspect that many of you would agree.
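If you want to run the pseudo-code for real, here is a minimal Python version using NumPy; the numbers are made up purely for illustration:

import numpy as np

# Hypothetical values, purely for illustration
actual_values = np.array([3.0, 5.0, 7.0, 9.0])
predictions = np.array([2.5, 5.5, 6.0, 9.5])

residuals_squared = (actual_values - predictions) ** 2
RSS = np.sum(residuals_squared)
print(RSS)  # 1.75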

Greek Letters are Jargon

Another thing I’ve realized is that part of the reason math becomes so difficult for people is that it is taught entirely in jargon, shorthand, and shorthand for shorthand.  The Greek letter sigma represents a sum, but if you don’t know that, it represents confusion.  If you aren’t familiar with this formula, the other Greek letters can be equally meaningless; yet with pseudo-code, any part of the formula can be rewritten using English words (or words in any other language) and thus be easily understood by anyone.

Crash Course in Machine Learning

I’m working on a short course called Crash Course in Machine Learning, which I will be teaching at the BlackHat conference in August.  I’m curious what people think about presenting algorithms using pseudo-code instead of math jargon.  I suspect it will make the material easier to understand without diluting the rigor.


Data Science Classes at BlackHat 2016!

I’m very pleased to announce that this year, my team and I got two classes accepted for the BlackHat conference in Las Vegas!  I believe that data science and machine learning have a huge role to play in infosec/cybersecurity; in a way, it really is a domain which is crying out for data science.  There are ever-expanding amounts of data, the actors are becoming more sophisticated, and security professionals are almost always strained for resources.  Our classes won’t turn you into data scientists, but you will learn how to directly apply data science techniques to cybersecurity.  If this sounds interesting to you, please check out our Crash Course in Data Science and our Crash Course in Machine Learning.  Both are two-day classes and will be offered July 30-31 and August 1-2, 2016.

Crash Course in Data Science (for Hackers)

This interactive course will teach network security professionals how to use data science techniques to quickly write scripts to manipulate and analyze network data.  Students will learn techniques for rapidly writing scripts to improve their work.  Participants will learn how to read in data in a variety of common formats and then write scripts to analyze and visualize that data.  A non-exhaustive list of what will be covered includes:

  • How to write scripts to read CSV, XML, and JSON files
  • How to quickly parse log files and extract artifacts from them
  • How to make API calls to merge datasets
  • How to use the Pandas library to quickly manipulate tabular data
  • How to effectively visualize data using Python
  • How to apply simple machine learning algorithms to identify potential threats

Finally, we will introduce students to cutting-edge big data tools, including Apache Spark and Apache Drill, and demonstrate how to apply these techniques to extremely large datasets.

Crash Course in Machine Learning (for Hackers)

This interactive course will teach network security professionals machine learning techniques and applications for network data.  The course is a continuation of the skills taught in the Crash Course in Data Science for Hackers.  Students will learn various machine learning methods, applications, model selection, testing, and interpretation.  Participants will write code to prepare and explore their data and then apply machine learning methods for discovery.

A non-exhaustive list of what will be covered includes:

  • Machine Learning Introduction and Terminology
  • Foundations of Statistics
  • Python Machine Learning Packages Introduction
  • Data Exploration and Presentation
  • Supervised Learning Methods
  • Unsupervised Learning Methods
  • Model Selection and Testing
  • Machine Learning Applications for Network Data

Data Driven Security Podcast

I recently had the opportunity to speak on the Data Driven Security Podcast with Jay Jacobs and Bob Rudis about data science training.  You can listen to the podcast here.

To underscore a few points from the interview:

Data science is not a binary condition.  Many people with whom I have spoken, or whom I have read, talk about “real” data science and/or “fake” data scientists.  Unlike medicine or law, in data science one need not be a “data scientist” to employ data science in one’s work.  In practical terms, this means that data science can be viewed as a spectrum of skills which can range from beginner to expert, and most importantly, you don’t need to be a “real data scientist” to use data science techniques.  In fact, it is my opinion that in the next few…

When designing training sessions for working professionals, I try to approach them with that in mind and build courses that teach the thought process behind data science, as well as practical skills which students can directly apply to their jobs.  The objective of the classes is not to convert students into data scientists but, again, to teach useful data science skills which are relevant to their work.

If you approach training development with the goal of teaching a professional a series of relevant skills instead of a whole new discipline, that translates into developing short, focused classes rather than lengthy bootcamps.

Data Science is more than Machine Learning

I’ve reviewed a lot of data science courses, and many focus very heavily on machine learning and statistics.  While this is certainly an important aspect of data science, study after study shows that data scientists spend 50-90% of their time doing data preparation and cleansing.  With that in mind, when designing courses, I try to spend a decent amount of time on data wrangling techniques.
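To give a flavor of what that wrangling time looks like, here is a minimal sketch in Python using pandas; the file name and column names are hypothetical:

import pandas as pd

# Hypothetical file and column names, purely for illustration
df = pd.read_csv('connection_logs.csv')

# Typical cleansing steps: drop duplicates, coerce types, handle bad values
df = df.drop_duplicates()
df['timestamp'] = pd.to_datetime(df['timestamp'], errors='coerce')
df['bytes_sent'] = pd.to_numeric(df['bytes_sent'], errors='coerce').fillna(0)
df = df.dropna(subset=['timestamp'])  # discard rows whose timestamps failed to parse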

Anyway, please listen to the podcast here and enjoy!  Questions/comments are welcome!


Let’s Stop Using The “Fake Data Scientist” Label

There was a post on KDnuggets yesterday entitled 20 Questions to Detect Fake Data Scientists by Andrew Fogg, and after reading the questions, I had to wonder what “real” data science is.  All 20 of the questions in the article focus on statistics/machine learning or data visualization, and even the stats questions seem focused on particular areas of emphasis.  I would argue that the post is an excellent example of mirroring bias, or in other words: I am a data scientist, and these are all the fundamental skills which I deem important; therefore, in order for me to deem you worthy of the title Data Scientist, you must have these skills.

Here are the questions:

  1. Explain what regularization is and why it is useful.
  2. Which data scientists do you admire most? which startups?
  3. How would you validate a model you created to generate a predictive model of a quantitative outcome variable using multiple regression.
  4. Explain what precision and recall are. How do they relate to the ROC curve?
  5. How can you prove that one improvement you’ve brought to an algorithm is really an improvement over not doing anything?
  6. What is root cause analysis?
  7. Are you familiar with pricing optimization, price elasticity, inventory management, competitive intelligence? Give examples.
  8. What is statistical power?
  9. Explain what resampling methods are and why they are useful. Also explain their limitations.
  10. Is it better to have too many false positives, or too many false negatives? Explain
  11. What is selection bias, why is it important and how can you avoid it?
  12. Give an example of how you would use experimental design to answer a question about user behavior.
  13. What is the difference between “long” and “wide” format data?
  14. What method do you use to determine whether the statistics published in an article (e.g. newspaper) are either wrong or presented to support the author’s point of view, rather than correct, comprehensive factual information on a specific subject?
  15. Explain Edward Tufte’s concept of “chart junk.”
  16. How would you screen for outliers and what should you do if you find one?
  17. How would you use either the extreme value theory, monte carlo simulations or mathematical statistics (or anything else) to correctly estimate the chance of a very rare event?
  18. What is a recommendation engine? How does it work?
  19. Explain what a false positive and a false negative are. Why is it important to differentiate these from each other?
  20. Which tools do you use for visualization? What do you think of Tableau? R? SAS? (for graphs). How to efficiently represent 5 dimension in a chart (or in a video)?  (From 20 Questions to Detect Fake Data Scientists  by Andrew Fogg)

The trouble is that data science is interdisciplinary–a mixture of domain expertise, computer science, and applied mathematics–so “true” data scientists have expertise in all three disciplines that make up data science.  However, these questions virtually ignore the domain and computer science disciplines, to say nothing of big data, unstructured data, etc.  For example, if someone were to ask these questions of an expert in computer vision, that candidate might do poorly because their skills–which are certainly in the realm of data science–do not fall neatly on this list.  Likewise for someone who is an expert in streaming text analytics.

If I were going to construct such a list, I might take about five of these questions and add questions like:

  1. What are the advantages of NoSQL systems compared with traditional databases?
  2. Which tools do you use to manipulate data?  Why?
  3. How do you determine the efficiency of an algorithm?
  4. Explain some common methods for analyzing free text.

However, I would not construct such a list in the first place.  The bottom line is that virtually any data scientist could probably come up with a list that would mislabel other data experts as fakes simply by asking questions about their weak areas.

Data Science is about Solving Problems

Data science is about extracting useful and actionable information from data.  As such, when I interview people, the most important thing for me is the candidate’s problem solving and analytical abilities.  I pick people to interview whose backgrounds lead me to believe they have experience in the data science realm, and then ask them to solve open-ended problems.  My real interest is not whether they arrive at a solution, but rather how they think about problems.  A good (or “real”) data scientist will be able to identify the problem and use the skills listed above to solve it, so there is no need to pepper the candidate with a pop quiz of stats questions.

There is a great talk from Daniel Tunkelang about hiring data scientists in which he discusses his hiring process and comes to a similar conclusion.

Let’s use a more positive, less exclusive term

Since data science covers so many areas, I think we can take it as a given that virtually nobody can truly be a master of everything.  Therefore, instead of the label “Fake Data Scientist,” perhaps we should use “Novice Data Scientist” or “Junior Data Scientist.”  I believe that everyone has the capacity to grow and learn new skills.  We weren’t all born with an understanding of deep learning or Markov chains, and if someone lacks certain requisite knowledge, instead of labeling them a phony, it is a better approach to view that candidate as a beginner who needs additional skills, and to provide them with suggestions as to how to acquire those skills.


Tips for Debugging Code without F-Bombs – Part 1

Debugging code is a large part of actually writing code, yet unless you have a computer science background, you have probably never been exposed to a methodology for debugging.  In this tutorial, I’m going to show you my basic method for debugging your code so that you don’t want to tear your hair out.

In Programming Perl, Larry Wall, the author of the Perl programming language, wrote that the attributes of a great programmer are Laziness, Impatience and Hubris:

  • Laziness:  The quality that makes you go to great effort to reduce overall energy expenditure. It makes you write labor-saving programs that other people will find useful, and document what you wrote so you don’t have to answer so many questions about it. Hence, the first great virtue of a programmer.  (p.609)
  • Impatience:  The anger you feel when the computer is being lazy. This makes you write programs that don’t just react to your needs, but actually anticipate them. Or at least pretend to. Hence, the second great virtue of a programmer. See also laziness and hubris. (p.608)
  • Hubris:  Excessive pride, the sort of thing Zeus zaps you for. Also the quality that makes you write (and maintain) programs that other people won’t want to say bad things about. Hence, the third great virtue of a programmer. See also laziness and impatience. (p.607)

These attributes also apply to writing good code so that you don’t have to spend hours and hours debugging it.

The Best Way to Avoid Errors is Not to Make Them

Ok… so that seems obvious, but really I’m asking another question: “How can you write code that decreases your likelihood of making errors?”  I do have an answer for that.  The first thing to remember is that bugs are easy to find when they are small.  To find bugs when they are small, write code in small chunks and test your code frequently.  If you are writing a large program, write a few lines and test what you have written to make sure it is doing what you think it is supposed to do.  Test often.  If you are writing a script that is 100 lines, it is MUCH easier to find errors if you test your code every 10 lines rather than write the whole thing and test at the end.  The better you get, the less frequently you will need to test, but still test your code frequently.
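Here is a minimal sketch of what testing in small chunks can look like in Python; the function and the test values are hypothetical:

# A small, self-contained chunk: parse the port out of a 'host:port' string
def parse_port(address):
    return int(address.split(':')[1])

# Test the chunk immediately, before building anything on top of it
assert parse_port('10.0.0.1:443') == 443
assert parse_port('example.com:80') == 80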

Good Coding Practices Will Help You Prevent Errors

This probably also seems obvious, but I’ve seen (and written) a lot of code that leaves a lot to be desired in the way of good practices.  Good coding practices mean that your code should be readable, and that someone who has never seen your code before should be able to figure out what it is supposed to do.  Now, I know a lot of people have the attitude that since they are the only one working on a particular piece of code, they don’t need to put in comments.  WRONG, WRONG, WRONG.  In response, I would ask: in six months, if you haven’t worked on this code, will you remember what it did?  You don’t need to go overboard, but you should include enough comments so that you’ll remember the code’s purpose.

Here are some other suggestions:

  1. Adopt a coding standard and stick to it:  It doesn’t matter which one you use, but pick one and stick to it.  That way, you will notice when things aren’t correct.  Whatever you do, don’t mix conventions, i.e., don’t have column_total, columnTotal and ColumnTotal as variables in the same script.
  2. Use descriptive variable names:  One of my pet peeves about a lot of machine learning code is that it uses X and Y as variable names.  Don’t do that.  This isn’t calculus class.  Use descriptive variable names such as test_data or target_sales, and please don’t use X, Y, or, even worse, i, I, l and L as variable names.
  3. Put comments in your code:  Just do it.
  4. Put one operation per line:  I know that, especially in Python and JavaScript, it is fashionable (and “Pythonic”) to cram as many operations onto one line as possible via method chaining.  I personally think in a series of steps, and it is easier to see the logic (and hence any mistakes) if you have one action per line, as in the sketch below.
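To illustrate points 2 through 4, here is a small sketch, with made-up data and names, showing the same calculation written both ways:

# Hypothetical data: (customer, country, sale_amount)
d = [('alice', 'US', 100), ('bob', 'UK', 250), ('carol', 'US', 300)]

# Hard to follow: cryptic names, no comments
s = [r[2] for r in d if r[1] == 'US']
x = sum(s) / len(s)

# Easier to follow: descriptive names, comments, one operation per line
sales_records = d
us_sale_amounts = [record[2] for record in sales_records if record[1] == 'US']
average_us_sale = sum(us_sale_amounts) / len(us_sale_amounts)
print(average_us_sale)  # 200.0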

Plan your program BEFORE you write it

I learned this lesson the hard way: if you want to spend many hours writing code that doesn’t work, then when faced with a tough problem, just dive right in and start coding.  If you want to avoid that, get a piece of paper and a pen (or whatever system you like) and:

  1. Break the problem down into the smallest, most atomic steps you can think of.
  2. Write pseudo-code that implements these steps.
  3. Look for extant code that you can reuse.

Once you’ve found reusable code and you have a game plan of pseudo-code, you can begin writing your code.  As you write, check every step against your pseudo-code to make sure that your code is doing what you expect it to do.
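As a sketch of what that process looks like, consider a hypothetical task (the file name and log format are made up): the pseudo-code steps become comments that you fill in and check one at a time.

import re

# Hypothetical task: find the most common client IP in a web server log
# Plan: 1) read each line, 2) extract the IP, 3) count each IP, 4) report the winner
ip_counts = {}
with open('access.log') as log_file:                          # step 1
    for line in log_file:
        match = re.match(r'(\d{1,3}(?:\.\d{1,3}){3})', line)  # step 2
        if match:
            ip = match.group(1)
            ip_counts[ip] = ip_counts.get(ip, 0) + 1          # step 3

print(max(ip_counts, key=ip_counts.get))                      # step 4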

Don’t Re-invent the Wheel

Another way to save yourself a lot of time and frustration is to reuse proven code to the greatest extent possible.  For example, Python has a myriad of libraries available on PyPI and elsewhere which really can save you a lot of time.  It is another huge pet peeve of mine to see people writing custom code for things which are publicly available.  This means that before you start writing code, you should do some research into what components are out there and available.
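As a small example, the hand-rolled counting in the previous sketch already ships with Python’s standard library; collections.Counter is exactly the kind of prewritten, pretested code worth reaching for:

from collections import Counter

# Counter replaces the manual dict-and-get counting pattern
words = ['spam', 'eggs', 'spam', 'ham', 'spam']
word_counts = Counter(words)
print(word_counts.most_common(1))  # [('spam', 3)]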

After all, if I were to ask whether you would rather:

  1. use prewritten, pretested and proven code to build your program, or
  2. write your own code that is unproven, untested and possibly buggy,

the logical thing to do would of course be the first.

In Conclusion

Great programmers never sit down at the keyboard and just start banging out code without a game plan and without understanding the problem they are trying to solve.  Hopefully by now you see that the first step in writing good code that you won’t have to debug is to plan out what you are trying to do, reuse extant code, and test frequently.  In the next installment, I will discuss the different types of errors and go through strategies for fixing them.
