Unleashing the Revolutionary Power of AI in Data Entry and Processing: Anticipating Unprecedented Advances Before 2025

Data entry and processing is one of the key areas where Artificial Intelligence (AI) is expected to have a major impact in the coming years. With the increasing amount of data being generated every day, the demand for faster and more efficient data processing has never been higher. Fortunately, AI technology is here to help meet this demand and take data entry and processing to the next level.

One of the main advantages of AI in data processing is its ability to automate manual data entry. This means that instead of relying on human data entry clerks, AI algorithms can process and categorize vast amounts of data much more efficiently and accurately. AI algorithms can also identify patterns and relationships within the data, allowing for more comprehensive data analysis.

Another key area where AI is expected to enhance data entry and processing is in natural language processing (NLP). NLP is a subfield of AI that focuses on the interactions between computers and humans in natural language. With advancements in NLP, AI will soon be able to understand and interpret written and spoken human language, making data entry and processing even more seamless.

Before 2025, we can expect to see significant advancements in AI’s ability to process and analyze unstructured data, such as images, videos, and audio. AI algorithms will be able to automatically identify and categorize information within these types of data, making data entry and processing much easier and more efficient. Additionally, AI will be able to process multiple languages, further expanding its reach and impact on data entry and processing.

Another exciting development in the field of AI and data entry and processing is the use of machine learning. Machine learning is a type of AI that allows algorithms to learn and improve over time through experience. With machine learning, AI algorithms can become more accurate and efficient at processing and analyzing data, reducing the risk of human error and improving the overall accuracy of the data.

In conclusion, the next few years will bring significant advancements in the field of AI and data entry and processing. From automating manual data entry to processing unstructured data and utilizing machine learning, AI has the potential to greatly enhance the accuracy and efficiency of data processing. By embracing these changes, we can look forward to a future where data entry and processing is seamless and accurate, providing valuable insights and helping organizations make better data-driven decisions.

AI Job Automation: What to Expect in the Next 5 Years and How to Stay Ahead of the Game

As the world of technology evolves, Artificial Intelligence (AI) is becoming an increasingly influential force in our lives and careers. With the advent of new and innovative AI technologies, we can expect to see a major transformation of the job market over the next few years.

One of the most significant changes we can expect to see is the automation of a wide range of tasks that were once performed by humans. This shift may seem daunting at first, but it’s important to keep in mind that it will also lead to the creation of new job opportunities.

Here are just a few of the areas where AI is expected to have a major impact:

Data entry and processing: With the help of advanced AI algorithms, vast amounts of data can now be processed and categorized with incredible efficiency and accuracy. This will result in a significant reduction of manual data entry tasks.

Customer service: AI-powered chatbots and virtual assistants are already helping many companies handle customer inquiries and support. In the years to come, these systems will only become more advanced and capable of handling increasingly complex tasks.

Manufacturing and logistics: AI is revolutionizing production processes and reducing the need for manual labor in manufacturing and logistics. This technology can be used to optimize production runs, streamline supply chains, and minimize waste, ultimately improving efficiency and cost-effectiveness.

Sales and marketing: By analyzing customer data, AI can predict which customers are most likely to make a purchase, allowing companies to tailor their sales and marketing efforts with greater precision. AI can also automate tasks such as lead generation and email campaigns, freeing up valuable time and resources for sales and marketing teams.

Healthcare: AI is having a major impact on healthcare by automating many tasks and improving patient outcomes. For example, AI algorithms can be used to process medical images, diagnose illnesses, and develop personalized treatment plans.

As AI continues to shape the job market, it’s important for individuals to embrace new opportunities and develop new skills. Fields such as AI development, data analysis, and cybersecurity will likely be in high demand, and those who invest in these areas will be well positioned for success in the years to come.

In conclusion, AI is not something to be feared, but rather an exciting opportunity to be embraced and utilized to its full potential. The next five years are going to be an incredible time for AI, and we can’t wait to see the impact it will have on the job market and beyond!

Maximizing Your Earnings with ChatGPT: A Guide for Aspiring AI Experts

Are you interested in making money with AI but not sure where to start? OpenAI’s ChatGPT is a powerful tool that has the potential to provide new opportunities for monetization and help you turn your AI expertise into a profitable venture. In this blog post, we will explore how you can use ChatGPT to earn money online, even if you are new to the field of AI.

  1. Offer ChatGPT-powered Customer Service: One of the easiest ways to get started with earning money using AI is by offering ChatGPT-powered customer service. Customer service is a critical component of any business, and many companies struggle to keep up with the volume of inquiries they receive. That’s where ChatGPT comes in – it can provide quick and personalized responses to customers 24/7, freeing up human customer service representatives to focus on more complex inquiries. By offering ChatGPT-powered customer service to businesses, you can earn a recurring income stream by charging a monthly fee for your services.
  2. Develop AI-powered Chatbots for E-commerce: Another way to earn money with ChatGPT is by developing AI-powered chatbots for e-commerce websites. Chatbots are becoming increasingly popular for online retailers as they provide instant support and recommendations to customers. With ChatGPT, you can create custom chatbots that can help e-commerce websites improve the customer experience. By developing chatbots for e-commerce websites, you can earn a one-time fee for your services and potentially earn recurring revenue through ongoing maintenance and updates.
  3. Create ChatGPT-powered Virtual Assistants: Virtual assistants are becoming more and more common in both personal and professional settings. With ChatGPT, you can create virtual assistants that can perform a range of tasks, such as scheduling appointments, answering frequently asked questions, and even making recommendations. As demand for virtual assistants continues to grow, there is a huge opportunity for aspiring AI experts to develop and sell these systems to businesses and individuals. You can earn money by charging a fee for your virtual assistant services or by selling the software outright.
  4. Offer ChatGPT Training and Consultation Services: Finally, you can monetize your AI expertise by offering ChatGPT training and consultation services to businesses and individuals. As ChatGPT continues to grow in popularity, there will be an increasing demand for experts who can help organizations and individuals understand and effectively utilize this technology. By offering training and consultation services, you can earn a fee for your expertise and help others take advantage of the potential of ChatGPT.

In conclusion, there are many ways for aspiring AI experts to use ChatGPT to earn money online. Whether you are interested in offering customer service, developing chatbots, creating virtual assistants, or offering training and consultation services, the potential for monetization is significant. Keep in mind that the key to success is staying up-to-date with the latest advancements in AI technology and marketing your services effectively. Don’t be intimidated by the fact that you are new to AI – with the right tools and resources, you can quickly become an expert and start earning money with ChatGPT.

Book Review – Hello World: Being Human in the Age of Algorithms – Part 1

Introduction
I often think of AI as something separate from traditional computer programming, something transcendent. However, most of the advances in modern AI are not the result of revolutionary new concepts or fields of study but rather the application of previously developed algorithms to significantly more powerful hardware and massive datasets.

Hannah Fry’s take on the world of AI covers topics ranging from justice to autonomous vehicles, crime, art and even medicine. Although the author is an expert in the field, she does a great job of distilling the topics down to a level a layperson can understand, while keeping them interesting for someone with more background in programming and AI.

My favourite quote from the first part of the book comes on page 8, where Hannah succinctly describes the essence of what an algorithm is in only one sentence:

An algorithm is simply a series of logical instructions that show, from start to finish, how to accomplish a task.

Fry, Hannah. Hello World: Being Human in the Age of Algorithms (p. 8). W. W. Norton & Company. Kindle Edition.

Once you read it, it seems obvious, but trying to describe to a first-year computer science student what an algorithm is can be a challenging task. The author manages this well. Despite the complexity and depth of the subject matter, Fry is able to bring context and relevance to a broad array of topics. The remainder of my review will speak to some of the book’s many sections and how someone with a business-facing view into the topics sees them.

Data
This section covers some of the unknown giants in data science, including Peter Thiel’s Palantir. The section also touches on some very public examples where analytics has played a negative role, such as Cambridge Analytica’s use of private user data during the 2016 US presidential election.

The story here is about data brokers: companies that buy and collect user data and personal information and then resell or share it for profit. A surprising fact is that some of these databases contain records of nearly everything you’ve ever done, from religious affiliation to credit-card usage. These companies seem to know everything about just about everyone. It turns out that it is relatively simple to make inferences about a person based on their online habits.

The chapter builds to one of the major stories of 2018, the Cambridge Analytica scandal, but it begins by discussing the five personality traits that psychologists have used to quantify individuals’ personalities since the 1980s: openness to experience, conscientiousness, extraversion, agreeableness and neuroticism. By pulling data from users’ Facebook feeds, Cambridge Analytica was able to create detailed personality profiles and deliver emotionally charged, effective political messages.

Perhaps the most interesting fact, though, is how small an impact this type of manipulation actually has. The largest change reported was from 11 clicks in 1,000 to 16 clicks in 1,000 (an absolute increase of half a percentage point). But even this small effect, spread over a population of millions, can cause dramatic changes to the outcome of, say, an election.

That’s the end of part 1 of this review. In Part 2, I’ll touch on some of the other sections of the book including Criminal Justice and Medicine.

AI Everything

These days it seems like businesses are trying to use AI to do everything. At least for startups, that isn’t far off. Anywhere there is a dataset remotely large enough and an answer that is vaguely definable, companies are putting together a business model to use machine learning to solve the problem. With some incredible successes in areas like image classification and defeating humans at video games, it’s hard not to be impressed.

One of the best channels for following recent breakthroughs in AI is the 2 Minute Papers YouTube Channel, started by Károly Zsolnai-Fehér, a professor at the Vienna University of Technology in Austria. Károly’s videos combine interesting clips of the programs in action with well-delivered summaries of recent papers illustrating advances in artificial intelligence.

In one of his latest videos, he covers an AI that not only can copy the most successful actions that humans take in video games but can actually improve on those actions to be better than the best human players. So does that mean that AI will be displacing office workers once it learns how to do their jobs better than them? Probably, yes. But maybe not quite how you think it might.

As much of a ‘black box’ as AI has been in the past, modern systems are becoming better and better at explaining how they arrived at an answer. This gives human operators predictive capabilities that we didn’t have with systems of the past, which could spit out an answer but gave us no indication of how that answer was formulated.

This Forbes article on Human-Centric AI provides some examples of how modern AI systems can be implemented to train employees to do their jobs better and even enjoy their jobs more while doing it! If that doesn’t sound incredible to you, you may be a machine that is only reading this page to improve its search algorithm.

So what does this all mean? A lot of research is showing that AI is actually creating many more jobs than it destroys. So, as long as you’re willing to try to understand the systems that will one day be our overlords, you should be able to upgrade your career and stay employed.

Whether you still want the job that remains is another question entirely.

Turning Your Selfie Into a DaVinci

Style transfer. It’s a deep-learning technique, built on transfer learning (reusing a network trained for another task), that allows the style of one image to be transferred onto another. It seems like a straightforward concept: take my selfie and make it look like a Michelangelo painting. However, it is a fairly recent innovation in deep neural networks that has allowed us to separate the content of an image from its style, and in doing so, to combine multiple images in ways that were previously impossible. For example, taking a long-dead artist’s style and applying it to your weekend selfie.

Just to prove that this is pretty cool, I’m going to take my newly built style transfer algorithm and apply it to a ‘selfie’ of my good dog, Lawrence. Here’s the original:

And here’s the image that I’m going to apply the style of:

That’s right, it’s da Vinci’s Mona Lisa, one of the most iconic paintings of all time. I’m going to use machine learning to apply da Vinci’s characteristic style to my iPhone X photo of my, admittedly very handsome, pupper.

If you’re interested, here’s a link to the original paper describing how to use Convolutional Neural Networks, or CNNs, to accomplish image style transfer. It’s written in relatively understandable language for such a technical paper, so I do recommend you check it out, given that you’re already reading a fairly technical blog.

So what is image content and style, and how can we separate the two? Well, neural networks are built in many layers, and it works out that some layers end up being responsible for detecting shapes, lines and the arrangement of objects. These deeper layers capture the ‘content’ of an image. The ‘style’, the colours and textures, is captured by the correlations between feature maps in other layers, several of them earlier in the network.
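To make the per-layer idea concrete, here’s a minimal sketch (a tiny stand-in network, not VGG19 itself, with names of my own choosing) of how activations can be captured from individual layers in PyTorch using forward hooks:

```python
import torch
import torch.nn as nn

# A tiny stand-in CNN (not VGG19) just to illustrate tapping intermediate layers.
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),   # earlier layer: textures/colours
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),  # deeper layer: shapes/arrangement
)

features = {}

def save_to(name):
    def hook(module, inputs, output):
        features[name] = output.detach()  # stash this layer's feature maps
    return hook

model[0].register_forward_hook(save_to('style'))
model[2].register_forward_hook(save_to('content'))

img = torch.rand(1, 3, 64, 64)  # a dummy 'selfie'
model(img)                      # one forward pass fills the features dict

print(features['style'].shape)    # feature maps from the earlier layer
print(features['content'].shape)  # feature maps from the deeper layer
```

In a real style-transfer setup the same trick is applied to several layers of a pre-trained VGG19 rather than a random toy network.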

Here’s the final result next to the original.

Pretty striking, if I do say so myself.

Using a pre-trained neural network called VGG19 and a few lines of my own code, I pull the feature maps from several layers and compute what’s called a Gram matrix for each. I then choose my style weights (how much I want each layer to apply), define a simple loss function to push us in the right direction, and apply the usual gradient descent algorithm. Poof: Lawrence is forever immortalized as a da Vinci masterpiece.
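The Gram-matrix step is short enough to sketch in full; this is a minimal version under my own naming, with random tensors standing in for real feature maps:

```python
import torch

def gram_matrix(activations):
    # activations: (batch, channels, height, width) feature maps from one layer
    b, c, h, w = activations.shape
    flat = activations.view(b, c, h * w)  # flatten each channel's feature map
    # Channel-by-channel correlations: this is what captures 'style'
    return torch.bmm(flat, flat.transpose(1, 2)) / (c * h * w)

def style_loss(generated, target, weight=1.0):
    # How far the generated image's channel correlations are from the target's
    return weight * torch.mean((gram_matrix(generated) - gram_matrix(target)) ** 2)

gen = torch.rand(1, 16, 32, 32, requires_grad=True)  # stand-in feature maps
tgt = torch.rand(1, 16, 32, 32)

loss = style_loss(gen, tgt)
loss.backward()                  # gradients flow back for gradient descent
print(gram_matrix(tgt).shape)    # channels x channels correlation matrix
```

In the full algorithm, this loss is summed over several layers (scaled by the style weights) and combined with a content loss before each gradient descent step.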

Impressed? Not Impressed? Let me know in the comments below. If you have anything to add, or you think I could do better please chime in! This is a learning process for me and I’m just excited to share my newfound knowledge.

Here’s a link to my code in a Google Colab Notebook if you want to try it out for yourself!

How Afraid Should You Be of AI?

A friend sent me a video today. It started off rather innocuously, with a program called EarWorm, designed to search for copyrighted content and erase it from memory online. As many of these stories do, it escalated quickly. Within three days of being activated by some careless engineers with no background in AI ethics, it had wiped out all memory of the last 100 years, not only digitally, but even in the brains of the people who remembered it. Its programmers had instructed it to do so with as little disruption to human lives as possible, so it kept everyone alive. It might have been easier to just wipe humanity off the map. Problem solved. No more copyrighted content being shared, anywhere. At least that didn’t happen. Right?

This story is set in the year 2028, only ten years from now. These engineers and programmers had created the world’s first Artificial General Intelligence (AGI), and it rapidly became smarter than all of humanity, with computing power and storage capacity surpassing anything previously available through all of human history. Assigned a singular mission, the newly formed AGI sets out to complete its task with remorseless efficiency. It quickly invents and enlists an army of nanoscopic robots that can alter human minds and wipe computer memory. By creating a self-replicating mesh network of these bots, the AI quickly spreads its influence around the world. It knows that humans will be determined to stop it, so it uses the nanobots to slightly alter the personalities of anyone intelligent enough to pose a threat to its mission, manipulating their brains just enough to achieve its goal while minimizing disruption. It does this by simply reducing the desire of the world’s best minds in AI to act, creating apathy for the takeover happening right in front of them. By pacifying those among us intelligent enough to act against it, the AI accomplishes its task within days, unencumbered by pesky humans.

Because it was instructed to accomplish its task with ‘as little disruption as possible’ the outcome isn’t the total destruction of humanity and all life in the universe, as is commonly the case in these sorts of AI doomsday scenarios. Instead, EarWorm did as it was programmed to do, minimizing disruption and keeping humans alive, but simultaneously robbing us of our ability to defend ourselves by altering our minds so that we posed no threat to its mission. In a matter of days, AI drops from one of the most researched and invested-in fields to being completely forgotten by all of humanity.

This story paints a chilling picture (though not as chilling as many ‘grey-goo’ scenarios, which see self-replicating, AI-powered nanobots turning the earth, and eventually the entire universe, into an amorphous cloud of grey goo). It is a terrifying prospect that a simple program built by some engineers in a basement could suddenly develop general intelligence and wipe an entire century of knowledge and information from existence without a whimper from humanity.

How likely is it? Do we need to worry about it? And what can we do about it? These are some of the questions that sprang to mind as I watched the well-produced six-minute clip. It is a scenario much more terrifying and, unfortunately, more plausible than those of popular TV and films like Terminator and even Westworld. There are a lot of smart people out there today who warn that AI, unchecked, could be the greatest existential threat faced by humanity. It’s a sobering thought to realize that this could happen to us and we wouldn’t even see it coming or know it ever happened.

Then, the real question that the video was posing dawned on me: Has this already happened?

We could already be living in a world where AI has removed our ability to understand it or to act against it in any way…

I hope not, because that means we’ve already lost.

Here’s the video if you’re interested

On AI and Investment Management

Index funds are the most highly traded equity investment vehicles, with some fund families, like those created by Vanguard Group, cumulatively valued at over US$4 trillion. Index funds have democratized investing by giving millions of people access to passive investments. But what are they?

An index fund is a market-capitalization-weighted basket of securities. Index funds allow retail investors to invest in a portfolio made up of companies representative of the entire market without having to build that portfolio themselves. Compared to actively managed vehicles like mutual funds and hedge funds, index funds tend to have much lower fees, because the only rebalancing that happens is driven by an algorithm that keeps the securities in the fund proportional to their market cap (market capitalization, or market cap, is the number of shares a company has on the market multiplied by the share price).
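As a concrete toy example (the tickers, share counts and prices are all made up), computing market-cap weights takes only a few lines:

```python
# Hypothetical companies: share counts and prices are invented for illustration
holdings = {
    'AAA': {'shares': 5_000_000, 'price': 100.0},   # cap: $500M
    'BBB': {'shares': 2_000_000, 'price': 300.0},   # cap: $600M
    'CCC': {'shares': 10_000_000, 'price': 40.0},   # cap: $400M
}

# Market cap = shares on the market x share price
caps = {t: h['shares'] * h['price'] for t, h in holdings.items()}
total_cap = sum(caps.values())

# Each stock's index weight is its share of the total market cap
weights = {t: cap / total_cap for t, cap in caps.items()}

for ticker, w in weights.items():
    print(f'{ticker}: {w:.1%}')   # AAA: 33.3%, BBB: 40.0%, CCC: 26.7%
```

This also shows the property described above: when prices move, the caps and the weights move together, so a market-cap-weighted portfolio stays correctly weighted without any trading.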

Starting in the 1970s, the first ‘index funds’ were attempts by companies to build equally weighted portfolios of stocks. This early form of the index fund was abandoned after a few months: it quickly became apparent that constantly rebalancing these portfolios to keep them equally weighted would be an operational nightmare. Companies soon settled on market-capitalization weighting, because a portfolio weighted by market cap stays that way without constant rebalancing.

With the incredible advancement of AI and extraordinarily powerful computers, shouldn’t it be possible to create new types of ‘passively managed’ funds that rely on an algorithm to trade? What that could mean is that index funds might not have to be market cap weighted any longer. This push is actually happening right now and the first non-market cap weighted index funds to appear in over 40 years could be available to retail investors soon.

But this means that we need to redefine the index fund. The new definition has three criteria that a fund must meet:

  1. It must be transparent – Anyone should be able to know exactly how it is constructed and be able to replicate it themselves by buying on the open market.
  2. It must be investable – If you put a certain amount of money in the fund, you will get EXACTLY the return that the investment shows in the newspapers (or more likely your iPhone’s Stocks app).
  3. It must be systematic – The vehicle must be entirely algorithmic, meaning it doesn’t require any human intervention to rebalance or create.

So, what can we do with this new type of index fund?

“Sound Mixer” board for investments with a high-risk, actively traded fund (hedge fund) on the top and lower risk, passively traded fund (index fund) on the bottom.

We can think of investing as a spectrum, with actively managed funds like hedge funds on one side, passively managed index funds on the other, and all the different parameters, like alpha, risk control and liquidity, as sliders on a ‘mixing board’ like the one in the image above. Currently, if we wanted to control this board, we would have to invest in expensive actively managed funds, and even then we wouldn’t get much granular control over each factor. With an AI-powered index fund, the possibilities for how the board could be arranged are endless. Retail investors could engage in all sorts of investment opportunities in the middle, instead of being forced into one category or the other.

An AI-powered index fund could allow an investor to dial in the exact parameters that they desire for their investment. Risk, alpha, turnover, Sharpe ratio, or any of a myriad of other factors could be tuned by applying these powerful algorithms.

The implications of a full-spectrum investment fund are incredible. Consider personalized medicine, a concept that is taking the healthcare industry by storm and could change the way doctors interact with patients. Companies like Apple are taking advantage of this trend by incorporating new medical devices into consumer products, like the EKG embedded in the new Apple Watch Series 4.

Personalized investing could be just as powerful. Automated portfolios could take into account factors like age, income level, expenses, and even lifestyle to create a portfolio that is specifically tailored to the individual investor’s circumstances.

So why can’t you go out and purchase one of these new AI managed, customizable index funds?

Well, unfortunately, the algorithms do not exist yet. The hardware and software exist today, but we’re still missing the ability to accurately model actual human behaviour. Economists still rely on some pretty terrible assumptions about people, which they then use to build the foundations of entire economic theories. One of these weak assumptions is that humans act rationally. In reality, there is a lot of evidence to suggest that many people act in the ways that evolution has programmed us to, and much of what allowed us to evolve over the last 4 billion years of life on Earth is pretty useless for success in 2018-era financial planning and investment.

All hope is not lost, however. New research into the concept of bounded rationality, the idea that rational decision-making is limited by the extent of human knowledge and capabilities, could help move this idea forward. One of the founding fathers of artificial intelligence, Herbert Simon, postulated that AI could be used to help us understand human cognition and better predict the kinds of human behaviours that helped keep us alive 8,000 years ago but are detrimental to wealth accumulation today.

By creating heuristic algorithms that can capture these behaviours and learning from big data to understand what actions are occurring, we may soon be able to create software that is able to accentuate the best human behaviours and help us deal with the worst ones. Perhaps the algorithm that describes humanity has already been discovered.

I Built a Neural Net That Knows What Clothes You’re Wearing

Okay, maybe that is a bit of a click-baitey headline. What I really did was program a neural network with PyTorch that can distinguish between ten different clothing items that might appear in a 28×28 image. To me, that’s still pretty cool.

Here’s an example of one of the images that gets fed into the program:

Yes, this is an image of a shirt.

Can you tell what this is? It looks kind of like a long-sleeve t-shirt to me, but it is so pixelated that I can’t really tell. But that doesn’t matter. What matters is what my trained neural net thinks it is, and whether that’s what it actually is.

After training for about two minutes on a subset of images like this (the training set is about 750 images), my model was able to choose the correct classification for any image I fed in about 84.3% of the time. Not bad for a first go at building a clothing-classifying deep neural net.

Below I have included the code that actually generates the network and runs a forward-pass through it:


import torch.nn as nn
import torch.nn.functional as F

class Network(nn.Module):
    def __init__(self, input_size, output_size, hidden_layers, drop_p=0.5):
        ''' Builds a feedforward network with arbitrary hidden layers.
       
            Arguments
            ---------
            input_size: integer, size of the input
            output_size: integer, size of the output layer
            hidden_layers: list of integers, the sizes of the hidden layers
            drop_p: float between 0 and 1, dropout probability
        '''
        super().__init__()
        # Add the first layer, input to a hidden layer
        self.hidden_layers = nn.ModuleList([nn.Linear(input_size, hidden_layers[0])])
       
        # Add a variable number of more hidden layers
        layer_sizes = zip(hidden_layers[:-1], hidden_layers[1:])
        self.hidden_layers.extend([nn.Linear(h1, h2) for h1, h2 in layer_sizes])
       
        self.output = nn.Linear(hidden_layers[-1], output_size)
       
        self.dropout = nn.Dropout(p=drop_p)
       
    def forward(self, x):
        ''' Forward pass through the network, returns the output logits '''
       
        # Forward through each layer in `hidden_layers`, with ReLU activation and dropout
        for linear in self.hidden_layers:
            x = F.relu(linear(x))
            x = self.dropout(x)
       
        x = self.output(x)
       
        return F.log_softmax(x, dim=1)

After training the network using backpropagation and gradient descent (code below), it successfully classified the vast majority of the images I fed in, in less than half a second. Mind you, these were grayscale images, formatted in a simple way and trained on a large enough dataset to ensure reliability.
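For reference, a training loop of this kind can be sketched in a few lines; the layer sizes and the random stand-in data below are my own choices, not the actual clothing dataset or training code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# A small stand-in classifier: 784 inputs (a flattened 28x28 image) -> 10 classes
model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = nn.NLLLoss()  # pairs with log_softmax outputs

# Random stand-in data in place of the real clothing images
images = torch.rand(64, 784)
labels = torch.randint(0, 10, (64,))

losses = []
for epoch in range(20):
    optimizer.zero_grad()                        # clear gradients from last step
    log_ps = F.log_softmax(model(images), dim=1)
    loss = criterion(log_ps, labels)
    loss.backward()                              # backpropagation
    optimizer.step()                             # gradient descent update
    losses.append(loss.item())

print(f'loss: {losses[0]:.3f} -> {losses[-1]:.3f}')  # loss falls as it trains
```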

If you want a good resource explaining what backpropagation actually does, check out another great video by 3Blue1Brown below:

So, what does this all look like? Is it all sci-fi futuristic and with lots of beeps and boops? Well… not exactly. Here’s the output of the program:

Output of my clothing-classifier neural net. Provides a probability that the photo is one of the 10 items listed.

The software grabs each image in the test set, runs it through a forward pass of the network, and ends up spitting out a probability for each class. Above, you can see that the network thinks this image is likely a coat. I personally can’t tell whether it is a coat, a pullover or just a long-sleeve shirt, but the software seems about 85% confident that it is, in fact, a coat.
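Since the network’s forward pass ends in log_softmax, turning its output into the probabilities shown above is one exponentiation away. Here’s a small sketch with made-up scores standing in for the network’s output:

```python
import torch
import torch.nn.functional as F

# Stand-in for the network's raw scores across the 10 clothing classes
logits = torch.tensor([[0.2, 1.5, -0.3, 4.0, 0.0, 0.1, -1.2, 0.5, 0.3, -0.5]])

log_ps = F.log_softmax(logits, dim=1)  # what the forward pass returns
ps = torch.exp(log_ps)                 # back to ordinary probabilities

top_p, top_class = ps.topk(1, dim=1)   # most likely class and its confidence
print(top_class.item(), f'{top_p.item():.1%}')
```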

Overall, it’s pretty awesome that after only a few weeks of practice (with most of that time spent learning how to program in Python) I can code my very own neural networks and they actually work!

If you’re interested, here’s a video of the neural network training itself and running through a few test images:

If you’d like to test out the code for yourself, here’s a link to my GitHub page where you can download all the files you need to get it running. Search Google if you can’t figure out how to install Python and run a Jupyter Notebook.

That’s all for now! See you soon 🙂

Facebook Made The Best Tool For Creating Neural Networks

It’s called PyTorch. And it’s a tool designed to work seamlessly with Python libraries like NumPy and Jupyter notebooks to create deep neural networks. As it turns out, it is much easier to use and more intuitive than Google’s TensorFlow packages. In fact, I have been trying to get TensorFlow working on my Mac laptop for about a month: each time I run it, I get a new error, and when I fix that error, I encounter another, and another, until I eventually resign myself to never being able to train a neural network on my laptop.

Fortunately, compatibility is not the only thing PyTorch has going for it. Since its release in 2017, it has been adopted by teams around the world in research and in business. It is extremely intuitive to use (for a high-level programming language targeted mostly at people with PhDs in computer science and mathematics, admittedly). But seriously, it is designed with the structure of neural networks in mind, so the syntax and structure of your code can match the logical flow and linear-algebra model that a neural network has conceptually.

A neural network with two hidden layers, as shown above, can be coded in PyTorch with less than 10 lines of code. Quite impressive.
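For instance, a 784-input network with two hidden layers might be sketched like this (the hidden-layer sizes here are my own choice):

```python
import torch
import torch.nn as nn

# 784-element input -> two hidden layers -> 10 outputs, in a handful of lines
model = nn.Sequential(
    nn.Linear(784, 128), nn.ReLU(),   # first hidden layer
    nn.Linear(128, 64), nn.ReLU(),    # second hidden layer
    nn.Linear(64, 10),                # one output node per digit
    nn.LogSoftmax(dim=1),
)

out = model(torch.rand(1, 784))  # feed in one flattened 28x28 image
print(out.shape)                 # 1 x 10: a log-probability per digit
```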

All the squishification functions are built into the PyTorch library, like:

  • Sigmoid: S(x) = 1/(1+e^{-x}),
  • ReLU: f(x) = max(x, 0), and
  • Softmax: \sigma(z)_j = e^{z_j}/\sum_{k=1}^{K} e^{z_k}.
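These definitions are easy to check numerically; here’s a quick plain-Python version of each:

```python
import math

def sigmoid(x):
    # Squashes any real number into the interval (0, 1)
    return 1 / (1 + math.exp(-x))

def relu(x):
    # Passes positives through unchanged; clips negatives to zero
    return max(x, 0.0)

def softmax(z):
    # Turns a list of scores into a probability distribution
    exps = [math.exp(v) for v in z]
    total = sum(exps)
    return [e / total for e in exps]

print(sigmoid(0))          # 0.5: the midpoint of the squash
print(relu(-3), relu(2))   # negatives clipped, positives kept
print(sum(softmax([1.0, 2.0, 3.0])))  # sums to 1 (a probability distribution)
```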

On top of that, you can define a 784-element multi-dimensional input matrix (AKA a ‘tensor’) with one line of code.

Here’s the code for the neural network I created above. It takes a 784-element input (a flattened 28×28-pixel image), pumps it through two hidden layers and then into a 10-node output layer, where each output node represents a digit (0–9). The images in the training set are all handwritten numbers between zero and nine, so, when trained, this neural network should be able to automatically identify the number that was written.

Jupyter notebook code for a neural network with 784 inputs, two hidden layers and a 10-node output layer

These few lines of code, executed on a server (or local host), produce the following output:

The neural network trying to classify a handwritten 5. Notice that the probability distribution is fairly even; that’s because we haven’t trained the network yet.

See how cool that is!? Oh… right. The network seems to have no clue which number it is. That’s because all we’ve done so far is a feedforward operation on an untrained neural network with a bunch of random numbers as the weights and zeroes as the biases.

In order to make this neural network do anything useful, we have to train it, and that involves another step: backpropagation. I’ll cover that in my next blog. For now, we’re left with a useless random distribution of numbers and weights and no idea what our output should be. Enjoy!