Introduction to Deep Learning: From Logical Calculus to Artificial Intelligence


Introduction



    Artificial intelligence has its roots in two philosophical ideas of Gottfried Leibniz, the great seventeenth-century philosopher and mathematician, viz. the characteristica universalis and the calculus ratiocinator. The characteristica universalis is an idealized language into which all of science could in principle be translated. It would be a language into which every natural language could be translated, and as such it would be the language of pure meaning, uncluttered by linguistic technicalities. This language could then serve as a background for explicating rational thinking, in a manner so precise that a machine could be made to replicate it. The calculus ratiocinator would be a name for such a machine.


    There is a debate among historians of philosophy over whether this would mean building software or hardware, but this is in fact an insubstantial question, since to draw the distinction at all we must understand the concept of a universal machine accepting different instructions for different tasks, an idea that would come from Alan Turing in 1936 (we will return to Turing shortly) but would become clear to the wider scientific community only in the late 1970s, with the advent of the personal computer. The ideas of the characteristica universalis and the calculus ratiocinator are Leibniz's central ideas, and they are scattered throughout his work, so there is no single point to reference them, but we point the reader to the paper, which is a good place to start exploring.
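    To make the software/hardware distinction concrete, here is a minimal sketch in Python (the language used throughout this book) of a universal machine in Turing's sense: one fixed piece of machinery whose behaviour is determined entirely by the instruction table it is given. The function name run_turing_machine and the example transition table flip_bits are our own illustrative inventions, not anything from Leibniz or Turing.

# A minimal sketch of a universal machine: the loop below is the fixed
# "hardware", and the transition table passed in is the "software".
def run_turing_machine(program, tape, state="start", blank="_", max_steps=1000):
    """Run a transition table `program` on a string `tape`.

    `program` maps (state, symbol) -> (new_state, new_symbol, move),
    where move is -1 (left), +1 (right) or 0 (stay). Halts on "halt".
    """
    tape = list(tape)
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape[head]
        state, tape[head], move = program[(state, symbol)]
        head += move
        if head < 0:                # grow the tape to the left as needed
            tape.insert(0, blank)
            head = 0
        elif head >= len(tape):     # grow the tape to the right as needed
            tape.append(blank)
    return "".join(tape)

# "Software" that flips every bit on the tape, halting at the blank.
flip_bits = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
    ("start", "_"): ("halt",  "_", 0),
}

print(run_turing_machine(flip_bits, "10110_"))  # prints 01001_

    The same run_turing_machine machinery runs any other transition table unchanged, which is exactly the point: asking whether the calculus ratiocinator is software or hardware presupposes this separation of fixed machine and interchangeable instructions.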


Table of Contents

1 From Logic to Cognitive Science

1.1 The Beginnings of Artificial Neural Networks

1.2 The XOR Problem

1.3 From Cognitive Science to Deep Learning

1.4 Neural Networks in the General AI Landscape

1.5 Philosophical and Cognitive Aspects

2 Mathematical and Computational Prerequisites

2.1 Derivations and Function Minimization

2.2 Vectors, Matrices and Linear Programming

2.3 Probability Distributions

2.4 Logic and Turing Machines

2.5 Writing Python Code

2.6 A Brief Overview of Python Programming

3 Machine Learning Basics

3.1 Elementary Classification Problem

3.2 Evaluating Classification Results

3.3 A Simple Classifier: Naive Bayes

3.4 A Simple Neural Network: Logistic Regression

3.5 Introducing the MNIST Dataset

3.6 Learning Without Labels: K-Means

3.7 Learning Different Representations: PCA

3.8 Learning Language: The Bag of Words Representation

4 Feedforward Neural Networks

4.1 Basic Concepts and Terminology for Neural Networks

4.2 Representing Network Components with Vectors and Matrices

4.3 The Perceptron Rule

4.4 The Delta Rule

4.5 From the Logistic Neuron to Backpropagation

4.6 Backpropagation

4.7 A Complete Feedforward Neural Network

5 Modifications and Extensions to a Feed-Forward Neural Network

5.1 The Idea of Regularization

5.2 L1 and L2 Regularization

5.3 Learning Rate, Momentum and Dropout

5.4 Stochastic Gradient Descent and Online Learning

5.5 Problems for Multiple Hidden Layers: Vanishing and Exploding Gradients
