Summary
Python is a general-purpose programming language with many excellent features: it is easy to learn, it makes it easy to write readable code, and it is usable for a wide range of applications. Python was developed by Guido van Rossum and released in 1991.
Python supports a variety of effective programming styles, including object-oriented, procedural, and functional programming. Thanks to its many libraries and frameworks, it is widely used in web applications, desktop applications, scientific and technical computing, machine learning, artificial intelligence, and other fields. Furthermore, it is cross-platform and runs on many operating systems, including Windows, macOS, and Linux. Because Python is an interpreted language, it does not require a separate compilation step and provides an interactive REPL (read-eval-print loop), which speeds up the development cycle.
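As a minimal sketch of the paradigms mentioned above, the same small task (summing the squares of the even numbers in a list) can be written in a procedural, a functional, and an object-oriented style; the function and class names here are illustrative, not from the book.

```python
# Procedural style: an explicit loop and accumulator.
def sum_even_squares_proc(numbers):
    total = 0
    for n in numbers:
        if n % 2 == 0:
            total += n * n
    return total

# Functional style: a generator expression with the built-in sum().
def sum_even_squares_func(numbers):
    return sum(n * n for n in numbers if n % 2 == 0)

# Object-oriented style: data and behavior bundled in a class.
class SquareSummer:
    def __init__(self, numbers):
        self.numbers = numbers

    def sum_even_squares(self):
        return sum(n * n for n in self.numbers if n % 2 == 0)

data = [1, 2, 3, 4]
# All three styles compute the same result: 2*2 + 4*4 = 20
assert sum_even_squares_proc(data) == 20
assert sum_even_squares_func(data) == 20
assert SquareSummer(data).sum_even_squares() == 20
```

Which style to use is largely a matter of taste and context; Python lets them coexist in the same program.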
This section discusses example implementations of Python code based on “Python Basic & Practical Programming [Skills for Professionals + Project Samples]”.
In this article, I will discuss the installation of Python and my reading notes.
Python Basic & Practical Programming
A simple way to start using Python is to check whether it is already installed, since it comes preinstalled on some systems. If you are using macOS or Unix (including Linux), open "Terminal", type python3 (or python), and press Enter or Return.
If you are using Windows, open "Command Prompt" (or "PowerShell") and type python3 (or python) in the same way, then press Enter.
If Python is installed, a message with the version and other information will be displayed, followed by the interactive prompt ">>>".
If you do not see this message and prompt, you will need to install Python. To do so, go to https://www.python.org, download the installer, and run it.
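The check described above can also be done without entering the interactive interpreter, by asking Python for its version directly (shown here for a Unix-like shell; on Windows the same commands work in Command Prompt):

```shell
# Print the installed Python version; if this fails, Python is not on the PATH.
python3 --version

# Run a one-line program without opening the REPL.
python3 -c "print('Hello, Python')"
```

If `python3` is not found but `python` is, try the same commands with `python` instead.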
For actual programming, please refer to the above or the following reference books.
This book is divided into three main parts: "Part 1: Grammar and Basic Techniques", "Part 2: Techniques for Different Purposes", and "Part 3: Practical! Development Projects". The first part literally describes the basic grammar and can be used as a dictionary during individual development; the second part contains reference code for actual use; and the last part covers practical, project-level applications.
Chapter 1 Giving Computers the Ability to Learn from Data
1.1 "Intelligent machines" that turn data into knowledge
1.2 Three Types of Machine Learning
1.3 Predicting the Future with Supervised Learning
1.3.1 Classification for predicting class labels
1.3.2 Regression for predicting continuous values
1.4 Solving Interactive Problems with Reinforcement Learning
1.5 Discovering Hidden Structures through Unsupervised Learning
1.5.1 Discovering groups by clustering
1.5.2 Dimensionality reduction for data compression
1.6 Basic Terminology and Notation
1.7 Roadmap for Building a Machine Learning System
1.8 Preprocessing: Data Formatting
1.8.1 Training and selecting a predictive model
1.8.2 Evaluating the model and predicting unknown instances
1.9 Using Python for Machine Learning
1.9.1 Installing the Python packages
Summary

Chapter 2 Classification Problems: Training Machine Learning Algorithms
2.1 Artificial Neurons: A Prehistory of Machine Learning
2.2 Implementing the Perceptron Training Algorithm in Python
2.3 Training a Perceptron Model on the Iris Dataset
2.4 ADALINE and Convergence of Learning
2.5 Minimizing the Cost Function Using Gradient Descent
2.5.1 Implementing ADALINE in Python
2.6 Large-Scale Machine Learning and Stochastic Gradient Descent
Summary

Chapter 3 Classification Problems: Using the Machine Learning Library scikit-learn
3.1 Selecting a Classification Algorithm
3.2 First Steps with scikit-learn
3.2.1 Training a perceptron with scikit-learn
3.3 Modeling Class Probabilities Using Logistic Regression
3.3.1 Intuitive understanding of logistic regression and conditional probability
3.3.2 Learning the weights of the logistic function
3.3.3 Training a logistic regression model with scikit-learn
3.3.4 Dealing with overfitting by regularization
3.4 Maximum-Margin Classification with Support Vector Machines
3.4.1 Understanding the maximum margin intuitively
3.4.2 Dealing with nonlinearly separable cases using slack variables
3.4.3 Alternative implementations in scikit-learn
3.5 Solving Nonlinear Problems Using Kernel SVMs: Identifying separating hyperplanes in high-dimensional space using the kernel trick
3.6 Decision Tree Learning
3.6.1 Maximizing information gain: achieving the highest efficiency possible
3.6.2 Constructing a decision tree
3.6.3 Combining weak learners into a strong learner using random forests
3.7 K-Nearest Neighbors: A Lazy Learning Algorithm
Summary

Chapter 4 Data Preprocessing: Building a Better Training Set
4.1 Dealing with Missing Data
4.1.1 Removing samples/features with missing values
4.1.2 Imputing missing values
4.1.3 scikit-learn's estimator API
4.2 Processing Categorical Data
4.2.1 Mapping ordinal features
4.2.2 Class label encoding
4.2.3 One-hot encoding of nominal features
4.3 Splitting the Dataset into Training and Test Datasets
4.4 Scaling the Features
4.5 Selecting Beneficial Features
4.5.1 Sparse solutions by L1 regularization
4.5.2 Sequential feature selection algorithms
4.6 Assessing the Importance of Features with Random Forests
Summary

Chapter 5 Compressing Data with Dimensionality Reduction
5.1 Unsupervised Dimensionality Reduction by Principal Component Analysis
5.1.1 Finding the eigenvalues of the covariance matrix
5.1.2 Feature transformation
5.1.3 Principal component analysis in scikit-learn
5.2 Supervised Data Compression with Linear Discriminant Analysis
5.2.1 Calculating the scatter matrices
5.2.2 Feature transformation
5.2.3 Projecting new data
5.2.4 LDA with scikit-learn
5.3 Nonlinear Mapping with Kernel Principal Component Analysis
5.3.1 Kernel functions and the kernel trick
5.3.2 Implementing kernel principal component analysis in Python
5.3.3 Projecting a new data point
5.3.4 Kernel principal component analysis in scikit-learn
Summary

Chapter 6 Best Practices for Model Evaluation and Hyperparameter Tuning
6.1 Pipelining for Workflow Efficiency
6.1.1 Loading the Breast Cancer Wisconsin dataset
6.1.2 Combining transformers and estimators in a pipeline
6.2 Evaluating Model Performance Using k-Fold Cross-Validation
6.2.1 The holdout method
6.2.2 k-fold cross-validation
6.3 Validating the Algorithm with Learning and Validation Curves
6.3.1 Using learning curves to diagnose bias and variance problems
6.3.2 Using validation curves to reveal overfitting and underfitting
6.4 Tuning Machine Learning Models with Grid Search
6.4.1 Using grid search to tune hyperparameters
6.4.2 Validating algorithms by nested cross-validation
6.5 Various Performance Metrics
6.5.1 Interpreting the confusion matrix
6.5.2 Optimizing the precision and recall of classification models
6.5.3 Plotting the ROC curve
6.5.4 Performance metrics for other classification tasks
Summary

Chapter 7 Ensemble Learning: Combining Different Models
7.1 Learning with an Ensemble
7.2 Implementing a Simple Majority-Vote Classifier
7.2.1 Combining classification algorithms by majority vote
7.3 Evaluating and Tuning an Ensemble Classifier
7.4 Bagging: Building a Classifier Ensemble with Bootstrap Samples
7.5 Boosting a Weak Learner with AdaBoost
Summary