Cloud Computing Technology


Cloud computing is a service that provides computing resources over the Internet using multiple networked computers. It allows users to access the computing resources they need, when they need them, from any location with an Internet connection.

Cloud computing has the following advantages:

  1. Scalability: Computing resources can be added as needed.
  2. Cost efficiency: Because users are provided only the resources they need, when they need them, cloud computing tends to be cost-effective.
  3. Mobility: Users can access the resources they need from any location with an Internet connection, which makes it convenient.
  4. Security: Some cloud computing services use encryption technology to ensure data security.

On the other hand, cloud computing has the following disadvantages:

  1. No authority over hardware management: As mentioned above, companies using cloud services rely on ICT resources provided by the service provider. The provider handles hardware configuration and troubleshooting, and the user company has no authority over them.
  2. Limited freedom of customization: Because companies do not manage the hardware, they cannot make decisions about the ICT resources or service content, and in some cases may not be able to obtain the services their business requires.
  3. Performance depends on other users: Cloud vendors usually serve multiple customers on a shared server or network, so a heavy workload from one customer can affect the performance experienced by others.

There are four main service models of cloud computing:

  1. Software-as-a-Service (SaaS): Services that provide applications and software. Typical SaaS offerings include Gmail and Google Workspace, as well as various map services and transit-information services.
  2. Platform-as-a-Service (PaaS): Services that provide tools and platforms for developing applications. Typical PaaS offerings include Google App Engine, Microsoft Azure, and IBM Cloud.
  3. Infrastructure-as-a-Service (IaaS): Services that provide virtualized infrastructure and resources such as virtual machines, storage, and networks; a typical example is Google Compute Engine.
  4. Hardware-as-a-Service (HaaS): Services that make hardware available over the Internet. Specifically, virtual servers are provided over the Internet, and their CPU, memory, storage capacity, and so on can be selected according to the purpose.

This blog discusses this cloud technology in more detail below.

Technical Details

Cloud computing is a computing model that provides resources and services via the Internet. AWS (Amazon Web Services) is a cloud computing platform provided by Amazon.com. AWS offers a variety of implementation patterns. Some typical implementation patterns and their procedures are described below.

Parallel distributed processing in machine learning distributes data and computation across multiple processing units (CPUs, GPUs, computer clusters, etc.) and processes them simultaneously to reduce processing time and improve scalability. It plays an important role when handling large datasets and complex models. This section describes concrete implementation examples of parallel distributed processing in machine learning in on-premise and cloud environments.
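As a minimal sketch of this data-parallel pattern, the following Python example splits a dataset into shards, lets worker processes compute partial results independently, and then combines them. In a real machine-learning workload the per-shard computation would be a partial gradient or similar; the function names here are illustrative only.

```python
# Data parallelism in miniature: split the data, map a worker function
# over the shards in separate processes, then reduce the partial results.
from multiprocessing import Pool

def partial_sum_of_squares(chunk):
    # Each worker handles one shard of the data independently
    # (stand-in for computing a partial gradient on a mini-shard).
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data, n_workers=4):
    # Split the data into one shard per worker.
    chunks = [data[i::n_workers] for i in range(n_workers)]
    with Pool(n_workers) as pool:
        partials = pool.map(partial_sum_of_squares, chunks)
    # Reduce step: combine the partial results into the final answer.
    return sum(partials)

if __name__ == "__main__":
    data = list(range(1000))
    print(parallel_sum_of_squares(data))  # same result as a serial computation
```

The same map-and-reduce structure appears in cluster frameworks and GPU training; only the granularity of the shards and the communication mechanism differ.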

  • Operating System Overview and Linux OS (CentOS,Ubuntu) Environment Settings

An operating system (OS) is the basic software in a computer system that provides an interface between hardware and application software. It manages the computer’s resources (processor, memory, disk space, etc.) and provides an environment in which users interact with the computer system.

This section discusses specific implementations of Linux and Linux distributions CentOS and Ubuntu with respect to this operating system and the operating system in a cloud environment (AWS).

Microservices are an architectural approach to software development characterized by dividing an application into multiple small, independent services (microservices). Each microservice has its own distinct functionality and can be developed, deployed, extended, and maintained individually. This section provides an overview of the elements that make up these microservices.

As mentioned in “Deploying and Operating Microservices – Docker and Kubernetes,” Kubernetes (often abbreviated as K8s) is an open source container orchestration system for deploying, scaling, and managing the applications we just discussed. Originally designed by Google, it is now maintained by the Cloud Native Computing Foundation (CNCF).

CNCF’s definition of cloud native is “a design philosophy that combines resilient, manageable, and observable loosely coupled systems with robust automation to make system changes frequently and predictably with minimal effort.”

In the previous article, we discussed container orchestration, which is expected to play an important role, as well as the steps companies are taking to become cloud native. However, a major challenge in going cloud native is dependence on specific cloud technologies. For example, when a specific cloud vendor’s service is used, the application architecture is forced to change according to that service’s functionality and service levels. This means that, no matter how much cloud computing is used, it is not always possible to respond quickly to business changes.

One reason container technology is attracting attention is its freedom from such vendor lock-in. Its technical elements use Linux kernel functions and are not significantly different from conventional application execution environments. The first step toward becoming cloud native lies in the fact that, thanks to the standardization of container technology, the cloud can be used without being tied to a specific vendor.

Cloud design patterns (CDPs) are a set of typical problems and solutions that arise when designing the architecture of cloud-based systems, organized and categorized in a way that abstracts their essence and makes them reusable. Among the CDP patterns, “Basic Patterns,” “Availability Improvement Patterns,” “Dynamic Content Processing Patterns,” and “Static Content Processing Patterns” are described.

CDPs are a set of typical problems and solutions that arise during the architectural design of cloud-based systems, organized and categorized in a way that abstracts their essence and makes them reusable. Among the CDP patterns, “Data Upload Pattern,” “Relational Database Pattern,” and “Asynchronous Processing/Batch Processing Pattern” are described.

The CDPs are a set of typical problems and solutions that arise when designing architectures for cloud-based systems, organized and categorized so that they can be reused by abstracting the essence of the problems. Among the CDP patterns, “Operation and Maintenance Patterns” and “Network Patterns” are described.

In this article, we will discuss how to realize an on-premise system consisting of one web server and one database server on AWS.

A system built on Amazon Web Services (AWS) is configured virtually in the cloud. Naturally, the computers, storage, and network are virtual as well, and when you first sign up for AWS there are no servers, let alone a network.

To create a system, it is necessary to start by building a virtual network. While the basic concept is the same as building a conventional physical system, there are many differences, and it is necessary to understand them fully before building a virtual system or network in AWS. This article therefore provides an overview of the differences between a legacy physical data-center environment and AWS.

Amazon VPC (Virtual Private Cloud) is a service for configuring a virtual network in the cloud. When deploying resources such as servers, it is first necessary to create a VPC. To run virtual machines in a VPC, users then create subnets within it and configure several networking settings. For users, this is one of the most basic AWS services.

Because a VPC is a virtual network environment, it can be configured remotely through a web browser without touching any hardware. However, building an AWS virtual network involves many AWS-specific settings, so in this section we will actually create a VPC and a subnet within it, paying close attention to these points.

When an EC2 instance is placed in a subnet, one or more private IP addresses are automatically assigned to it. First, we describe the allocation rules and the mechanism behind them.
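The allocation rules can be illustrated with a short Python sketch. In every VPC subnet, AWS reserves five addresses: the network address, the next three (VPC router, DNS, and future use), and the last address, which would normally be the broadcast address. Only the remainder can be assigned to instances; the subnet CIDR below is illustrative.

```python
# Compute which addresses in a subnet AWS can actually assign to
# EC2 instances, given the five-addresses-reserved-per-subnet rule.
import ipaddress

def assignable_addresses(cidr):
    net = ipaddress.ip_network(cidr)
    hosts = list(net)                          # every address in the block
    reserved = set(hosts[:4]) | {hosts[-1]}    # first four + last are reserved
    return [ip for ip in hosts if ip not in reserved]

subnet = "10.0.1.0/24"
usable = assignable_addresses(subnet)
print(len(usable))   # 256 - 5 = 251 assignable addresses
print(usable[0])     # 10.0.1.4 is the first address an instance can receive
```

This is why a /24 subnet in AWS yields 251 usable addresses rather than the 254 you would expect in a conventional network.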

Based on the IP address allocation rules described above, let’s actually place an EC2 instance on the subnet and see how it works. Here, we will discuss an example of placing an EC2 instance on the subnet “mysubnet01” described above.

AWS provides various EC2 instance types with different performance levels. EC2 instances are charged per hour of operation on a pay-per-use basis, with higher-performance instances costing more. For this experiment, a low-specification, inexpensive instance type is sufficient; when actually operating a system, however, it is necessary to select an instance type that balances cost and performance against the system requirements. Instance performance is mainly determined by the following seven items.

To connect to an EC2 instance from the Internet, a public IP address and an Internet gateway are required. This in itself is no different from an ordinary network environment, but public IP addresses in AWS are handled a little differently: rather than being assigned directly to the instance, the public IP address is translated to the instance’s private IP address using NAT.

In this article, we will discuss how to assign a public IP and set up an Internet Gateway. We will then describe how the assigned IP address looks from the instance by logging in to the EC2 instance via SSH and checking the network interface settings.

To connect an EC2 instance in a VPC to the Internet, simply assigning a public IP address is not sufficient: an Internet gateway must be attached, and the route table must also be changed.
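As a toy model of what the route table change accomplishes (this is not the AWS implementation), the sketch below represents a route table containing the automatic "local" route for the VPC CIDR and an added default route targeting an Internet gateway, and resolves destinations by longest-prefix match. The gateway ID is hypothetical.

```python
# A toy VPC route table: AWS always adds a "local" route for the VPC
# CIDR; Internet access requires adding a 0.0.0.0/0 route to the
# Internet gateway. Lookups use longest-prefix match, so VPC-internal
# traffic stays on the local route.
import ipaddress

route_table = {
    ipaddress.ip_network("10.0.0.0/16"): "local",      # VPC-internal traffic
    ipaddress.ip_network("0.0.0.0/0"): "igw-example",  # hypothetical gateway ID
}

def lookup(destination):
    dest = ipaddress.ip_address(destination)
    # Longest-prefix match: the most specific matching route wins.
    matches = [net for net in route_table if dest in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return route_table[best]

print(lookup("10.0.1.25"))      # "local": stays inside the VPC
print(lookup("93.184.216.34"))  # "igw-example": leaves via the Internet gateway
```

Without the default route, the second lookup would find no match, which is exactly why an instance with a public IP still cannot reach the Internet until the route table is edited.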

The AWS firewall function has two mechanisms: security groups and network ACLs. The former is applied to each EC2 instance, the latter to each subnet.

The reason for having two mechanisms with different scopes is that they are used differently: roughly speaking, network ACLs provide security on a per-subnet basis, while security groups control ports that need to be handled individually for each instance. (Security groups are also stateful, whereas network ACLs are stateless.)
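The per-instance, allow-list behavior of a security group can be modeled in a few lines of Python. This is purely illustrative, not the AWS implementation; the rule set assumes a hypothetical admin network for SSH.

```python
# A minimal model of security-group evaluation: security groups are
# allow-lists, so inbound traffic is permitted only if some rule matches
# its protocol, port, and source range; everything else is denied.
import ipaddress

inbound_rules = [
    # (protocol, port, allowed source range) - open HTTP to the world...
    ("tcp", 80, ipaddress.ip_network("0.0.0.0/0")),
    # ...but allow SSH only from an assumed admin network.
    ("tcp", 22, ipaddress.ip_network("203.0.113.0/24")),
]

def allowed(protocol, port, source_ip):
    src = ipaddress.ip_address(source_ip)
    return any(
        proto == protocol and p == port and src in net
        for proto, p, net in inbound_rules
    )

print(allowed("tcp", 80, "198.51.100.7"))  # True: HTTP is open to everyone
print(allowed("tcp", 22, "198.51.100.7"))  # False: SSH is restricted
```

Installing a web server then amounts to adding a rule like the first one, which is exactly the security-group change described below.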

In this article, we will discuss these two firewall functions and describe how to change security groups, which is necessary when web server software such as Apache HTTP Server or nginx is installed on an EC2 instance.

For websites with back-end databases, it is common practice, for security reasons, to place the database server in a subnet that is not directly accessible from the Internet. An EC2 instance can also be set up in such a private subnet.

To do any work on an EC2 instance, you need to connect to it remotely via SSH or similar means. However, an instance with only a private IP address cannot be reached from the Internet, so it cannot be operated remotely in a direct way. There are two common approaches to this: (1) use a bastion (stepping-stone) server, or (2) configure a VPN. In this article, we describe a concrete implementation of (1), using WordPress and MySQL as an example.

Web servers used for business are usually operated using URLs with their own domains, such as “http://www.example.co.jp/”. In this case, a “public static IP address” and a “DNS server” are required.

In an on-premises environment, it is common practice to use BIND when DNS is required, so installing BIND on an EC2 instance is a natural approach in AWS as well. While name resolution can certainly be built this way, AWS deployments usually use Route 53, a managed DNS service (one whose operation and management AWS takes care of).

Terraform is an open source tool developed by HashiCorp, a type of IaC (Infrastructure as Code) tool that enables efficient construction of development environments. Terraform is characterized by declaring and managing infrastructure configuration as code.

Terraform allows infrastructure to be described as declarative code; version-controlling that code makes changes to the system easy to track, and dependencies are resolved automatically, greatly reducing the human error associated with manual work.
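The declarative model can be illustrated with a toy reconciliation sketch in Python (this is not Terraform itself; the resource names and attributes are hypothetical): the desired state is described as data, compared with the current state, and only the difference would be applied.

```python
# Toy illustration of declarative IaC: describe the desired state,
# diff it against the current state, and derive a plan of changes.
desired = {
    "web-server": {"type": "t2.micro"},
    "db-server": {"type": "t2.small"},
}
current = {
    "web-server": {"type": "t2.micro"},  # already exists, unchanged
}

def plan(current, desired):
    # Resources in desired but not current must be created.
    to_create = {k: v for k, v in desired.items() if k not in current}
    # Resources in current but not desired must be destroyed.
    to_delete = {k: v for k, v in current.items() if k not in desired}
    # Resources present in both but with different attributes must change.
    to_update = {k: v for k, v in desired.items()
                 if k in current and current[k] != v}
    return to_create, to_update, to_delete

create, update, delete = plan(current, desired)
print(create)  # only the missing db-server would be created
```

This mirrors the plan/apply workflow: because the code states *what* should exist rather than *how* to build it, rerunning the same plan against an up-to-date state produces no changes.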

Terraform is an open source tool for automating the construction of cloud infrastructure and other IT resources. Terraform allows you to create, modify, and version resources in a programmable way.

Kubernetes is an open source orchestration tool for managing applications running on Docker and other container runtimes. Kubernetes makes it easy to deploy, scale, manage, and fail over applications across multiple nodes.

Microservices need to be packaged as self-contained artifacts that can be replicated and deployed with a single command. A service also needs a short startup time and a light footprint so that it can be up and running within seconds. Compared with setting up a bare-metal machine with a host OS and the necessary dependencies, containers can by nature be deployed very quickly, and packaging microservices in containers allows a faster, more automated transition from development to production. It is therefore recommended that microservices be packaged in containers.
