What is Cloud Computing?
Cloud computing is a computing model that provides resources and services via the Internet. It is characterized by accessing computing resources and data on remote servers rather than in a traditional on-premises local environment. Cloud computing has the following characteristics:
- On-demand resource delivery: users can access as many computing resources (processing power, storage, network, etc.) as they need, when they need them. This allows scaling as needed.
- Use as a shared resource: Resources are shared by multiple users. This reduces costs and improves efficiency.
- Access via the Internet: Cloud services are accessed over the Internet. Users can connect to cloud resources using a web browser or dedicated client software.
- Provider-managed resources: The cloud provider manages the hardware and infrastructure, allowing users to focus on their applications and data; management and maintenance of the underlying resources is handled by the provider.
With these characteristics, cloud computing offers the following advantages:
- Scalability: On-demand scaling of cloud resources allows for flexibility in responding to increases and decreases in demand.
- Flexibility and accessibility: Resources can be accessed via the Internet, making them location- and device-independent.
- Cost savings: Cloud services typically use a pay-as-you-go billing model, allowing you to pay only for what you need. This also reduces infrastructure and hardware management costs.
- Efficient use of resources: Because resources are shared among many users, they can be utilized more efficiently.
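The pay-as-you-go advantage can be illustrated with simple arithmetic; the following is a minimal sketch in Python, where the hourly rate and usage figures are purely illustrative, not actual AWS prices:

```python
# Compare pay-as-you-go billing with an always-on, fixed server
# for a workload that only runs part of the time.
HOURLY_RATE = 0.10           # $/instance-hour (illustrative, not a real AWS price)

hours_actually_used = 200    # bursty workload: ~200 hours of real usage per month
always_on_hours = 24 * 30    # a fixed server runs around the clock

pay_as_you_go_cost = hours_actually_used * HOURLY_RATE   # pay only for usage
fixed_cost = always_on_hours * HOURLY_RATE               # pay for idle time too

print(f"pay-as-you-go: ${pay_as_you_go_cost:.2f}, fixed: ${fixed_cost:.2f}")
```

Under these assumptions, the bursty workload costs a fraction of keeping an equivalent server running continuously.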
Typical cloud providers include Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP). These providers offer cloud services at various levels, including infrastructure, platform, and software.
About AWS
AWS (Amazon Web Services) is a cloud computing platform provided by Amazon.com. The following is a description of the main features and services offered by AWS.
- Scalability and Flexibility: AWS provides a cloud-based infrastructure that allows for rapid scaling of resources to meet demand. This allows for flexibility to respond to business growth and traffic fluctuations.
- Rich Services: AWS offers hundreds of different services that can be used for a wide range of areas including computing, storage, databases, networking, security, artificial intelligence, machine learning, and IoT. Typical services include Amazon EC2 (virtual servers), Amazon S3 (object storage), Amazon RDS (relational database service), and AWS Lambda (serverless computing).
- Global Infrastructure: AWS has data centers around the world and provides services in numerous regions and availability zones. This allows performance and availability to be optimized by utilizing data centers that are geographically close to users.
- Security: AWS takes security very seriously and offers multi-layered security measures. Data encryption, access control, and DDoS attack protection are built in to ensure a high level of security.
- Payment Model: AWS uses a pay-as-you-go billing model where you are only charged for the resources you use. This allows users to pay only for the resources they need. AWS also has a flexible fee structure and offers a variety of options and discount plans.
Typical AWS implementation patterns
AWS offers a variety of implementation patterns. Some typical implementation patterns are described below.
- Scalable Web Application: This pattern is for building web applications, using Elastic Load Balancing (ELB) to distribute traffic and Auto Scaling groups to automatically increase or decrease the number of instances. It is also common to use Amazon RDS to manage the database and Amazon S3 to store static content.
- Data processing and analysis: This pattern is for big data processing and analysis, using Amazon EMR (Elastic MapReduce) to run Hadoop and Spark for processing large amounts of data, Amazon Redshift as the data warehouse, and Amazon Athena and Amazon QuickSight to analyze and visualize the data.
- Serverless architecture: This pattern builds a serverless architecture, using AWS Lambda to execute application code and API Gateway to expose API endpoints. This allows the application to run without provisioning or managing servers.
- Fault Tolerance and High Availability: This pattern is designed to ensure system fault tolerance and high availability. Resources are deployed in multiple availability zones to ensure redundancy, and Amazon Route 53, an AWS service, is used for DNS record failover and traffic routing.
These are examples of some implementation patterns, and AWS can combine different patterns and services according to use cases and requirements. By referring to best practices such as the AWS Well-Architected Framework, it is also possible to optimize the implementation in terms of security, performance, cost-effectiveness, and other aspects.
The following describes some of these implementation patterns for web applications.
Implementation of web applications with AWS
<Implementation Patterns>
The following are specific implementation examples of web applications using AWS.
- Combination of EC2 and RDS: The most standard pattern is to use Amazon EC2 (Elastic Compute Cloud) to run a web server and Amazon RDS (Relational Database Service) to host the database. The web application code is deployed to the EC2 instance, and EC2 and RDS are connected within a VPC (Virtual Private Cloud). Users access the web application using the public IP address of the EC2 instance.
- Deployment using Elastic Beanstalk: Deploy the web application using AWS Elastic Beanstalk. Elastic Beanstalk automates the platform building and deployment of the application. The developer specifies the code for the application and Elastic Beanstalk automatically builds the environment and handles scaling and load balancing.
- Build a serverless architecture: Build a serverless web application using AWS Lambda and Amazon API Gateway. Lambda functions handle requests from API Gateway and execute the back-end business logic, using only the resources they need and scaling automatically. API Gateway receives requests from clients and routes them to the Lambda functions.
- Static Web Hosting: Host a static website using Amazon S3 (Simple Storage Service). In this configuration, website files (HTML, CSS, JavaScript, images, etc.) are uploaded to an S3 bucket, the bucket is set to public, and users access the website using the S3 bucket’s endpoint URL.
<Implementation Procedure Using EC2/RDS>
The following is an overview of the implementation procedure for a standard web application using EC2/RDS on AWS.
- Define requirements: Clearly define the requirements for the web application. This includes consideration of required functionality, database design, security requirements, scalability requirements, etc.
- Create an AWS account: Go to the official AWS website and create an AWS account. After creating an account, you will be able to access AWS services.
- Select a service: Select an appropriate AWS service based on your requirements. Typically, Amazon EC2 (web server), Amazon RDS (database), Amazon S3 (static content hosting), and Amazon Route 53 (domain management) are used.
- Network Design: Design the network infrastructure for the web application; create a network environment using Amazon VPC (Virtual Private Cloud) and configure public and private subnets.
- Server Setup: Create an EC2 instance and set up the web server. Select an appropriate operating system (Linux, Windows, etc.) for the instance and install the necessary software and libraries.
- Database Setup: Create a database using RDS and create the necessary tables and schemas. RDS supports database engines such as MySQL, PostgreSQL, and Amazon Aurora.
- Deploy the application: Deploy the web application code and files to the EC2 instance. Typically, Git or file transfer protocols (SFTP, SCP, etc.) are used to transfer the files.
- Security Settings: To secure your web application, configure the appropriate security groups, access control lists (ACLs), encryption, etc. It is also important to encrypt communications using HTTPS (SSL/TLS).
- Configure the domain: Register a domain using Amazon Route 53 and set up a domain name for the web application. Also, configure DNS records to associate the domain name with the EC2 instance and load balancer.
- Scaling and Monitoring: Configure scaling and performance monitoring for your web application, including configuring auto-scaling of instances using Auto Scaling and using CloudWatch to monitor metrics and set alerts.
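The server-setup step above is often automated by passing a user-data script when launching the instance; the following is a minimal sketch of such a configuration, assuming Amazon Linux 2 and the Apache web server (cloud-init runs it once at first boot):

```shell
#!/bin/bash
# EC2 user data (Amazon Linux 2 assumed): install and start Apache,
# then publish a placeholder page.
yum update -y
yum install -y httpd
systemctl enable httpd
systemctl start httpd
echo "<h1>Hello from EC2</h1>" > /var/www/html/index.html
```

With this in place, the instance serves a page as soon as it boots, and the application deployment step replaces the placeholder content.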
The details of these steps are described in “Introduction to Amazon Web Services Networking (2) – Creating a VPC Region and Subnet,” “Introduction to Amazon Web Services Networking (3) Overview and Deployment of EC2 Instances,” and “Introduction to Amazon Web Services Networking (4) Connecting and Checking Instances to the Internet.”
<Implementation Procedure Using AWS Elastic Beanstalk>
AWS Elastic Beanstalk is a managed application deployment platform that allows developers to focus on application deployment and scaling. Below are the key features and benefits of AWS Elastic Beanstalk.
- Easy deployment: Elastic Beanstalk automates the application deployment process. Developers can simply upload their application code and do not need to worry about configuring back-end resources or setting up the platform.
- Managed Environment: Elastic Beanstalk allows AWS to manage the infrastructure and resources required to run the application. Developers can focus on the application code.
- Automatic scaling: Elastic Beanstalk can automatically scale according to the application load. The number of instances can be increased as traffic increases and decreased as traffic decreases.
- Multiple Platform Support: Elastic Beanstalk supports multiple programming languages and frameworks: Java, .NET, Node.js, Python, Ruby, PHP, and many others.
- Application Health Monitoring: Elastic Beanstalk provides application health monitoring and logging. It monitors application performance and errors and sends alerts when necessary.
- Support for custom configuration: Elastic Beanstalk supports custom configuration of applications, such as setting environment variables and database connections. This gives developers the flexibility to configure as needed.
The following describes the steps to implement a web server using AWS Elastic Beanstalk.
- Create or login to an AWS account.
- Access the AWS Management Console.
- Create Elastic Beanstalk:
- Go to the Elastic Beanstalk service page and click “Create Application”.
- Enter a name for your application and select a platform (e.g. Node.js, Python, Java, etc.).
- Optionally add a description and tags for your application.
- Create the application.
- Creating the environment:
- On the Application Overview page, click “Create Environment”.
- Enter an environment name and select a domain name (you can optionally set a custom domain).
- Select the platform version.
- Optionally set the environment type (Web server environment, worker environment, etc.) and EC2 instance type.
- Optionally add environment settings (environment variables, database, etc.).
- Create the environment.
- Deploy the application:
- On the Elastic Beanstalk environment page, click “Upload Application Version”.
- Specify the application version and upload the source bundle (ZIP file or Docker image).
- Optionally set the deployment configuration (environment variables, database, etc.).
- Start deployment.
- Deployment Confirmation:
- If the deployment completes successfully, the environment status will be “Green” (normal) on the Elastic Beanstalk environment page.
- Access the application endpoint with a browser to check the operation.
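On the Python platform, Elastic Beanstalk looks by default for a module named application.py that exposes a WSGI callable named `application`; a minimal sketch of such an app, using only the standard library, might look like this:

```python
# application.py -- minimal WSGI app for Elastic Beanstalk's Python platform.
# Beanstalk's default WSGIPath expects a callable named `application`.
def application(environ, start_response):
    body = b"Hello from Elastic Beanstalk"
    # start_response sets the HTTP status line and response headers.
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(body))),
    ])
    # A WSGI app returns an iterable of byte strings.
    return [body]
```

Zipping this single file and uploading it as the source bundle in the “Upload Application Version” step above is enough to get a responding environment.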
Next, we will discuss the serverless approach.
<Procedure for implementing serverless architecture using Lambda + API Gateway>
This pattern maximizes the benefits of AWS serverless architecture. By using Lambda for code execution, the application can be executed without having to prepare servers such as EC2 in advance and without having to manage them. In addition, since the billing system is based on the actual number of executions, costs can be kept quite low for small to medium-sized projects. Furthermore, when combined with API Gateway, the API can be executed from the client and published as a Web service.
An overview of the implementation procedure is shown below.
- Create or login to an AWS account.
- Access the AWS Management Console.
- Create a Lambda function:
- Go to the Lambda service page and click “Create Function”.
- Select the appropriate runtime (Python, Node.js, Java, etc.).
- Enter a function name and select an execution role (create a new role if necessary).
- Enter the function code in the code editor.
- Add environment variables and other settings if necessary.
- Specify the handler function (entry point).
- Save the function.
- Creating API Gateway:
- Go to the API Gateway service page and click “Create API”.
- Select “REST API”.
- Choose whether to create a new API or use an existing API.
- Define resources and methods.
- Enable Lambda proxy integration in the method settings.
- Select the appropriate Lambda function.
- Create the API deployment.
- Test the API:
- Access the deployed API endpoint.
- Send a request to verify that the Lambda function is working properly.
- Configure a domain name if necessary.
- Configure a custom domain name on the API Gateway configuration page.
- Purchase and configure a domain name and associate it with API Gateway.
The above completes the web server setup using AWS Lambda and API Gateway, where the Lambda function handles requests and API Gateway handles the mapping between requests and responses.
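A minimal Lambda handler for this pattern might look like the following sketch; the event shape follows API Gateway's Lambda proxy integration format, and the greeting logic is purely illustrative:

```python
import json

def lambda_handler(event, context):
    # With proxy integration, API Gateway passes the whole HTTP request
    # as `event`; query parameters arrive under "queryStringParameters".
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    # The return value must include statusCode and a string body,
    # which API Gateway maps back onto the HTTP response.
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

After deploying this function and enabling proxy integration on the method, a request such as `GET /hello?name=AWS` would return the JSON greeting.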
<Example of a serverless architecture implementation using AWS Fargate>
AWS Fargate is a powerful tool that provides flexibility and simplicity in deploying microservice architectures and container-based applications. Fargate's features include:
- Serverless: Fargate is a serverless architecture, meaning that AWS manages the infrastructure. This allows you to focus on container deployment and scaling.
- Resource isolation: Each task (container) is allocated independent resources. This avoids resource conflicts and impacts between tasks.
- Flexible scaling: Fargate supports auto-scaling, automatically adjusting the number of containers as requests increase or decrease.
- Container Image Support: Fargate supports Docker container images, allowing you to create containers with your favorite tools and frameworks.
- Platform choice: Fargate can be used with several container orchestration platforms, including Amazon Elastic Container Service (ECS) and Amazon Elastic Kubernetes Service (EKS).
- Security and Compliance: Fargate is compliant with the AWS security and compliance framework to protect and secure your data.
The following describes the steps to implement a web server using AWS Fargate.
- Create or login to your AWS account.
- Access the AWS Management Console.
- Create a container image:
- Create a container image of the web server using Docker.
- Create a Dockerfile and include the required software and configuration.
- Build the container image locally and push it to a Docker image registry (e.g. ECR).
- Create task definition:
- Go to the Amazon ECS (Elastic Container Service) service page and click “Create Task Definition”.
- Enter a task definition name and configure the container details.
- Specify the container image and configure port mapping, environment variables, etc.
- Save the task definition.
- Creating a cluster:
- On the Amazon ECS service page, click “Create Cluster”.
- Enter a name for your cluster.
- Optionally configure network settings such as VPC, subnets, security groups, etc.
- Create the cluster.
- Create the service:
- On the Amazon ECS Services page, click on “Create Service”.
- Enter a service name and select a task definition.
- Configure the service settings (number of tasks, auto-scaling, etc.).
- Configure the load balancer settings and define the listener rules.
- Create the service.
- Check the service:
- If the service starts successfully, the task is executed and the web server runs on Fargate.
- Access the load balancer endpoint to verify its operation.
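The container-image step above might use a Dockerfile along these lines; this is a minimal sketch that serves static content with nginx, where `site/` is a hypothetical local folder holding the web content:

```dockerfile
# Minimal web server image for Fargate: nginx serving static files.
FROM nginx:alpine
# Copy the site content into nginx's default document root.
COPY site/ /usr/share/nginx/html/
# nginx listens on port 80; the task definition maps this port.
EXPOSE 80
```

After `docker build` and a push to ECR, the resulting image URI is what the task definition references.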
Of these approaches, the recommended basic flow is to start by considering Lambda, and if that is difficult, to consider the ECS or EC2 pattern. However, since it is important to select services that meet the requirements, these patterns need not always be adopted; use the right combination of services for the job, as needed.
Finally, we describe the procedure for building a server using static web hosting with Amazon S3.
<Procedures for building a server with static web hosting using Amazon S3>
S3 (Simple Storage Service) is a simple and scalable static web hosting solution used for content distribution and website hosting.
- Create or login to your AWS account.
- Access the AWS Management Console.
- Create an S3 bucket:
- Go to the S3 service page and click “Create Bucket”.
- Enter a bucket name. The bucket name must be globally unique.
- Select a region.
- Optionally configure public access settings.
- Create the bucket.
- Configure bucket properties:
- Select the created bucket and edit its properties.
- Enable static web hosting.
- Optionally specify an index document (e.g. index.html).
- Optionally specify an error document (e.g. error.html).
- Save changes.
- Upload Files:
- Add static web content (e.g., HTML, CSS, JavaScript, images) that you want to upload to the bucket.
- Uploading methods include AWS Management Console, AWS CLI, and AWS SDK.
- Permission Settings:
- Set bucket permissions to control public access.
- Access control can also be defined using bucket policies.
- Publish the website:
- Access the published website using the bucket’s endpoint URL.
- Opening the endpoint URL in a browser will display the static web content.
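The permission step typically attaches a bucket policy that allows public read access to the objects; the following is a minimal sketch (replace `example-bucket` with your bucket name, and note that the bucket's Block Public Access settings must also permit this):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example-bucket/*"
    }
  ]
}
```

This grants anonymous read access to every object in the bucket, which is what static web hosting requires; write access remains restricted.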
Example of non-web server application implementation with AWS
The following are specific examples of non-web server application implementations using AWS.
- Building data pipelines: Data pipelines can be built using AWS Glue, which automates data ETL (Extract, Transform, Load), cleaning and transforming data, and moving data between different data stores. In addition, it can be combined with storage services such as Amazon S3 and Amazon Redshift to build data warehouses and analytical environments.
- Serverless function execution: Serverless functions can be executed using AWS Lambda, which runs functions in response to triggers and handles scaling and management automatically. For example, it can be combined with API Gateway to create RESTful APIs, or used to resize and process images stored in Amazon S3.
- Training and Inference of Machine Learning Models: Amazon SageMaker can be used to train machine learning models and run inference. SageMaker simplifies the machine learning workflow, including data preprocessing, hyperparameter tuning, and model deployment. It can also be combined with other AWS services to handle large datasets and real-time inference.
The details are described below.
Procedures for implementing training and inference of machine learning models with AWS
The following is a brief description of the steps to implement training and inference of machine learning models with AWS.
- Data Preparation: Prepare the data for training the machine learning model. This may involve collecting the dataset, preprocessing it, and performing feature engineering. The dataset is then uploaded to an AWS storage service such as S3.
- Model definition: Define the algorithm and architecture of the machine learning model to be trained. Model selection and parameterization depend on the nature of the problem. Typically, Amazon SageMaker is used to train and deploy the models.
- Create a SageMaker notebook: Create a SageMaker notebook instance and a notebook for training. The notebook performs tasks such as data visualization, preprocessing, model training, and evaluation, with the code written in Python using Jupyter Notebook.
- Configure a training job: Use the SageMaker console or API to configure a training job. The training job specifies the type and number of instances to use, location of training data, hyperparameter settings, etc.
- Train the model: Run the training job to train the model. SageMaker automatically provisions the necessary resources and distributes the data to process and train the model. Training progress and logs can be viewed in the notebook.
- Deploy the model: Once training is complete, deploy the model by creating a SageMaker endpoint to host the model. The endpoint provides an API for inference, allowing you to make predictions in real time.
- Perform inference: Send an inference request to the deployed model to retrieve predictions; make an API request to the SageMaker endpoint and receive the results. Inference can be performed in real time, or on large amounts of data using batch transform jobs.
Implementation steps for building a data pipeline with AWS
This section describes the implementation steps for building a data pipeline with AWS. A data pipeline is a mechanism to automate a series of processes such as data collection, transformation, processing, and storage.
- Determine the source and target of the data: Determine the source and target of the data to be handled in the data pipeline. Source data is typically obtained from databases, log files, APIs, etc. Target data is typically stored in data warehouses, data lakes, storage services, etc.
- Data collection: Establish a mechanism to collect the data. This can be done using AWS services such as Amazon Kinesis, Amazon SQS, and AWS DataSync. Receive data from the data sources and store it in temporary storage (e.g., Amazon S3).
- Data transformation and processing: Convert the collected data into the required format and perform the necessary processing. Services such as AWS Glue, AWS Lambda, and Amazon EMR may be used for data transformation, cleansing, aggregation, and merging.
- Data Storage and Management: Store transformed and processed data in appropriate storage, perform data versioning and access management, etc. Services such as Amazon S3, Amazon RDS, and Amazon Redshift are typically used.
- Scheduling and Automation: Scheduling and workflow orchestration tools are used to automate each step of the data pipeline. This is done using AWS services such as Amazon CloudWatch Events, AWS Step Functions, and AWS Data Pipeline.
- Monitoring and error handling: Monitor the normal operation of the data pipeline and handle errors when they occur. Monitor logs and metrics using CloudWatch Logs and CloudWatch Alarms, and set up notifications and automatic recovery processes in the event of errors.
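The transformation step of such a pipeline ultimately reduces to a pure function over records, whether it runs in Glue, Lambda, or EMR. The following sketch shows a Lambda-style record transform; the field names and cleaning rules are hypothetical:

```python
def transform(records):
    """Clean raw records before loading them into the target store (sketch)."""
    cleaned = []
    for rec in records:
        # Drop incomplete rows: a record without a user_id cannot be joined later.
        if rec.get("user_id") is None:
            continue
        cleaned.append({
            "user_id": rec["user_id"],
            # Normalize free-text fields so downstream aggregation keys match.
            "email": rec.get("email", "").strip().lower(),
            # Coerce amounts to numeric form for the warehouse schema.
            "amount_usd": float(rec.get("amount", 0)),
        })
    return cleaned
```

Keeping the transform side-effect free like this makes it easy to unit-test locally before wiring it into the pipeline service.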
Serverless function execution with AWS
Serverless function execution with AWS uses a service called AWS Lambda, which executes code in a serverless environment and is highly scalable and cost-effective. The steps for serverless function execution with AWS are described below.
- Create a function: Create a Lambda function using the AWS Management Console or AWS CLI. Select the language and runtime for the function, and upload or directly type in the function code.
- Set triggers: Set triggers to execute Lambda functions. Triggers can come from various AWS services and events, such as API Gateway, S3 events, CloudWatch Events, and DynamoDB streams. When the trigger fires, the Lambda function is executed automatically.
- Function Configuration: Configure the Lambda function. You can configure the function’s memory size, timeout, environment variables, access control, VPC connection, and other settings. This allows control over function performance and security.
- Execution and Monitoring: Lambda functions are automatically executed by triggering events. Function execution logs and metrics can be viewed in CloudWatch Logs and CloudWatch metrics. This allows you to monitor function behavior and troubleshoot or optimize as needed.
- Scaling and Cost Optimization: Lambda functions are automatically scaled and resources are allocated based on the number of triggering events and load. This allows for high scalability and cost efficiency. In addition, resource allocation and cost optimization can be configured according to the performance requirements of the function.
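As a concrete illustration of the trigger step, the sketch below shows a handler for S3 put events. The event structure follows the S3 notification format, and the processing here (collecting object keys) is purely illustrative:

```python
def handler(event, context):
    # S3 delivers notifications as a list under "Records"; each record
    # carries the bucket and object key that triggered the function.
    keys = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        keys.append(f"{bucket}/{key}")
    # In a real function, each object would be fetched and processed here.
    return {"processed": keys}
```

Because the handler is an ordinary function, it can be exercised locally with a sample event before configuring the S3 trigger.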