Fundamentals of Machine Learning - Support Vector Machines Made Easy
Publisher | UTB |
Edition | 2020 |
Pages | 155 |
Format | 19.6 x 26.4 x 1.0 cm |
Weight | 412 g |
Article type | English book |
Series | UTB Uni-Taschenbücher 5251 |
ISBN-10 | 3825252515 |
EAN | 9783825252519 |
Order no. | 82525251A |
Artificial intelligence will change our lives forever - both at work and in private life. But how exactly does machine learning work? Two professors from Lübeck explore this question. In their English-language textbook they teach the basics needed to use Support Vector Machines, explaining, for example, linear programming, Lagrange multipliers, kernels and the SMO algorithm. They also cover neural networks, evolutionary algorithms and Bayesian networks. Definitions are highlighted in the book, and exercises invite readers to participate actively. The textbook is aimed at students of computer science, engineering and the natural sciences, especially in the fields of robotics, artificial intelligence and mathematics.
Table of contents:
Contents
Preface
1 Symbolic Classification and Nearest Neighbour Classification
1.1 Symbolic Classification
1.2 Nearest Neighbour Classification
2 Separating Planes and Linear Programming
2.1 Finding a Separating Hyperplane
2.2 Testing for feasibility of linear constraints
2.3 Linear Programming
MATLAB example
2.4 Conclusion
3 Separating Margins and Quadratic Programming
3.1 Quadratic Programming
3.2 Maximum Margin Separator Planes
3.3 Slack Variables
4 Dualization and Support Vectors
4.1 Duals of Linear Programs
4.2 Duals of Quadratic Programs
4.3 Support Vectors
5 Lagrange Multipliers and Duality
5.1 Multidimensional functions
5.2 Support Vector Expansion
5.3 Support Vector Expansion with Slack Variables
6 Kernel Functions
6.1 Feature Spaces
6.2 Feature Spaces and Quadratic Programming
6.3 Kernel Matrix and Mercer's Theorem
6.4 Proof of Mercer's Theorem
Step 1 - Definitions and Prerequisites
Step 2 - Designing the right Hilbert Space
Step 3 - The reproducing property
7 The SMO Algorithm
7.1 Overview and Principles
7.2 Optimisation Step
7.3 Simplified SMO
8 Regression
8.1 Slack Variables
8.2 Duality, Kernels and Regression
8.3 Deriving the Dual form of the QP for Regression
9 Perceptrons, Neural Networks and Genetic Algorithms
9.1 Perceptrons
Perceptron-Algorithm
Perceptron-Lemma and Convergence
Perceptrons and Linear Feasibility Testing
9.2 Neural Networks
Forward Propagation
Training and Error Backpropagation
9.3 Genetic Algorithms
9.4 Conclusion
10 Bayesian Regression
10.1 Bayesian Learning
10.2 Probabilistic Linear Regression
10.3 Gaussian Process Models
10.4 GP model with measurement noise
Optimization of hyperparameters
Covariance functions
10.5 Multi-Task Gaussian Process (MTGP) Models
11 Bayesian Networks
Propagation of probabilities in causal networks
Appendix - Linear Programming
A.1 Solving LP0 problems
A.2 Schematic representation of the iteration steps
A.3 Transition from LP0 to LP
A.4 Computing time and complexity issues
References
Index