**Decision Tree Classifier** is a classification model suited to simple tasks where the data space is not huge and can be easily visualized. Despite its simplicity, it shows very good results on such tasks and can often match or outperform more complicated models.

This article is part two of a two-article mini-series on the Decision Tree Classifier. For a detailed overview of the logic and principles behind this model, please check part one: Decision Tree Classifiers Explained.

You can also check out how to build Random Forest Classifiers, which put multiple decision trees to work together.

Interested in more stories like this? Follow me on Twitter at @b_dmarius and I'll post there every new article.

**Article Overview:**

- Decision Tree Classifier Dataset
- Decision Tree Classifier in Python with Scikit-Learn
- Decision Tree Classifier - preprocessing
- Training the Decision Tree Classifier model
- Using our Decision Tree model for predictions
- Decision Tree Visualisation

## Decision Tree Classifier Dataset

Recently I've created a small dummy dataset to use for simple classification tasks. I'll paste the dataset here again for your convenience.

The purpose of this data is, given three facts about a certain moment (the weather, whether it is a weekend or a workday, and whether it is morning, lunch, or evening), to predict whether there's a traffic jam in the city.
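Since the original table isn't reproduced here, a minimal illustrative version of such a dataset might look like this (the rows below are made up for illustration, not copied from the original):

```python
# Illustrative traffic-jam dataset.
# Each row: (weather, day type, time of day, traffic jam?)
dataset = [
    ("sunny", "workday", "morning", "yes"),
    ("sunny", "weekend", "morning", "no"),
    ("rainy", "workday", "evening", "yes"),
    ("rainy", "weekend", "lunch",   "no"),
    ("snowy", "workday", "lunch",   "yes"),
    ("sunny", "workday", "evening", "yes"),
    ("snowy", "weekend", "evening", "no"),
    ("rainy", "workday", "morning", "yes"),
]
```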

## Decision Tree Classifier in Python with Scikit-Learn

We have three dependencies to install for this project, so let's install them now. Obviously, the first thing we need is the scikit-learn library, and then we need two more dependencies which we'll use for visualization.

Now let's import what we need from these packages.
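The imports could look roughly like this (pydotplus and graphviz are assumptions for the two visualization dependencies, since common tutorials pair them with scikit-learn trees):

```python
# Install the dependencies first, e.g.:
#   pip install scikit-learn pydotplus graphviz

# LabelEncoder turns categorical text values into integers.
from sklearn.preprocessing import LabelEncoder

# DecisionTreeClassifier is the model itself; export_graphviz
# will later help us visualize the trained tree.
from sklearn.tree import DecisionTreeClassifier, export_graphviz
```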

Now let's load our dataset. In a previous article on the Naive Bayes Classifier I've defined a few helper methods to easily load our simple dataset. Let's reuse that.
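The exact helpers from the Naive Bayes article aren't shown here, but a sketch of such a loader, splitting each row into features and a label, could look like this (names and row values are hypothetical):

```python
# Hypothetical helper: split each dataset row into a feature
# tuple (all columns but the last) and a label (the last column).
def load_dataset(rows):
    features = [row[:-1] for row in rows]
    labels = [row[-1] for row in rows]
    return features, labels

rows = [
    ("sunny", "workday", "morning", "yes"),
    ("rainy", "weekend", "lunch", "no"),
]
X, y = load_dataset(rows)
```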

## Decision Tree Classifier - preprocessing

We know that machine learning models can't work directly with text, so we make their lives easier by converting the text to numerical values.

#### Label Encoder

We will use this encoder provided by scikit-learn to transform categorical data from text to numbers. If a feature has *n* possible values in our dataset, the LabelEncoder will transform them into numbers from 0 to *n*-1, so that each textual value has a numeric representation.

For example, let's encode our time of day values.
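A minimal sketch of encoding the time-of-day values (note that LabelEncoder sorts the distinct classes alphabetically before assigning numbers):

```python
from sklearn.preprocessing import LabelEncoder

times_of_day = ["morning", "lunch", "evening", "morning", "evening"]

encoder = LabelEncoder()
encoded = encoder.fit_transform(times_of_day)

# Classes are sorted alphabetically:
# evening -> 0, lunch -> 1, morning -> 2
print(list(encoder.classes_))  # ['evening', 'lunch', 'morning']
print(list(encoded))           # [2, 1, 0, 2, 0]
```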

## Training the Decision Tree Classifier model

Now let's train our model. Remember, since all our features are textual values, we first need to encode them, and only then can we jump to training.
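Put together, the encoding and training steps could look like this (the rows are illustrative stand-ins for the original dataset; one encoder is fitted per column, since each column has its own set of categories):

```python
from sklearn.preprocessing import LabelEncoder
from sklearn.tree import DecisionTreeClassifier

# Illustrative rows: (weather, day type, time of day, traffic jam?)
rows = [
    ("sunny", "workday", "morning", "yes"),
    ("sunny", "weekend", "morning", "no"),
    ("rainy", "workday", "evening", "yes"),
    ("rainy", "weekend", "lunch",   "no"),
    ("snowy", "workday", "lunch",   "yes"),
    ("snowy", "weekend", "evening", "no"),
]

# One LabelEncoder per column, because each column has its own categories.
columns = list(zip(*rows))
encoders = [LabelEncoder().fit(col) for col in columns]

# Encode the three feature columns and the label column.
X = list(zip(*(enc.transform(col) for enc, col in zip(encoders[:3], columns[:3]))))
y = encoders[3].transform(columns[3])

model = DecisionTreeClassifier()
model.fit(X, y)
```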

## Using our Decision Tree model for predictions

Now we can use the model we have trained to make predictions about the traffic jam.

And it seems to be working! It correctly predicts the traffic jam situations given our data.
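A self-contained prediction sketch (again on illustrative rows): we encode a new observation with the same per-column encoders used for training, predict, and decode the numeric answer back to text.

```python
from sklearn.preprocessing import LabelEncoder
from sklearn.tree import DecisionTreeClassifier

# Illustrative rows: (weather, day type, time of day, traffic jam?)
rows = [
    ("sunny", "workday", "morning", "yes"),
    ("sunny", "weekend", "morning", "no"),
    ("rainy", "workday", "evening", "yes"),
    ("rainy", "weekend", "lunch",   "no"),
    ("snowy", "workday", "lunch",   "yes"),
    ("snowy", "weekend", "evening", "no"),
]
columns = list(zip(*rows))
encoders = [LabelEncoder().fit(col) for col in columns]
X = list(zip(*(enc.transform(col) for enc, col in zip(encoders[:3], columns[:3]))))
y = encoders[3].transform(columns[3])
model = DecisionTreeClassifier().fit(X, y)

# Encode a new observation with the same encoders, then predict.
sample = ("rainy", "workday", "evening")
sample_encoded = [[enc.transform([v])[0] for enc, v in zip(encoders[:3], sample)]]

# Decode the numeric prediction back into "yes"/"no".
predicted = encoders[3].inverse_transform(model.predict(sample_encoded))
print(predicted[0])  # 'yes' on these illustrative rows
```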

## Decision Tree Visualisation

Scikit also provides us with a way of visualizing a Decision Tree model. Here's a quick helper method I wrote to generate a *png* image from our decision tree.
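The original helper isn't shown here, but a sketch using scikit-learn's own `export_graphviz` could look like the following; the final png rendering step (presumably done with pydotplus in the original) is left as a comment because it needs Graphviz installed on the system.

```python
from sklearn.preprocessing import LabelEncoder
from sklearn.tree import DecisionTreeClassifier, export_graphviz

# Train a small tree on illustrative rows, as before.
rows = [
    ("sunny", "workday", "morning", "yes"),
    ("sunny", "weekend", "morning", "no"),
    ("rainy", "workday", "evening", "yes"),
    ("rainy", "weekend", "lunch",   "no"),
]
columns = list(zip(*rows))
encoders = [LabelEncoder().fit(col) for col in columns]
X = list(zip(*(enc.transform(col) for enc, col in zip(encoders[:3], columns[:3]))))
y = encoders[3].transform(columns[3])
model = DecisionTreeClassifier().fit(X, y)

# Export the tree structure in Graphviz dot format.
dot_data = export_graphviz(
    model,
    feature_names=["weather", "day_type", "time_of_day"],
    class_names=list(encoders[3].classes_),
    filled=True,
    rounded=True,
)
with open("tree.dot", "w") as f:
    f.write(dot_data)

# To render a png, e.g. with pydotplus:
#   pydotplus.graph_from_dot_data(dot_data).write_png("tree.png")
# or with the graphviz CLI:
#   dot -Tpng tree.dot -o tree.png
```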

And here's the result from that.

Yes, so I'm very happy with the results we got today. We used a really small dataset and a really small model, but I think we've learned a lot about this type of classifier.

If you want to explore more about the logic behind the Decision Tree Classifier, then please don't forget to read the other article in this mini-series: Decision Tree Classifiers Explained.

You can also check out how to build Random Forest Classifiers, which put multiple decision trees to work together.

*Thank you so much for reading this! Interested in more stories like this? Follow me on Twitter at @b_dmarius and I'll post there every new article.*