Encryption, Security Technologies, and Data Compression Techniques
The rapid development of digital technology in recent years has accelerated the distribution and sharing of information and increasingly interconnected the structures and functions of society. As a result, the importance of encryption and security technologies continues to grow.
Cryptography is a technique that protects data by converting it into an unreadable format so that it is inaccessible to anyone but authorized recipients; it is used to ensure confidentiality, integrity, and availability.
Conversion to an unreadable format is done using a private or public key. A private-key (symmetric) scheme uses the same key to encrypt and decrypt a message, while a public-key scheme uses the public key for encryption and the private key for decryption. With the latter, even if the public key is published, the ciphertext cannot be decrypted without the private key.
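As a minimal sketch of the symmetric (private-key) case, assuming the third-party cryptography package is available (public-key schemes would use, e.g., RSA instead):

```python
# A minimal symmetric-key round trip using the third-party
# "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # the shared secret key
cipher = Fernet(key)

token = cipher.encrypt(b"confidential message")  # unreadable without the key
plaintext = cipher.decrypt(token)                # only key holders can do this
assert plaintext == b"confidential message"
```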
Data integrity can also be verified using a technique called a hash function. A hash function converts data of any size into a fixed-length value and always produces the same value for the same input. Such values are used to generate message authentication codes (MACs) and digital signatures.
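For example, with Python's standard hashlib and hmac modules (the data and key below are illustrative):

```python
import hashlib
import hmac

data = b"some data"
# The same input always yields the same fixed-length digest.
digest = hashlib.sha256(data).hexdigest()

# A MAC additionally mixes in a secret key, so only key holders
# can produce or verify the tag.
secret = b"shared-secret-key"
tag = hmac.new(secret, data, hashlib.sha256).hexdigest()
print(digest, tag)
```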
Security technologies are used to secure networks and systems; securing a network or system means ensuring confidentiality, integrity, availability, and authenticity. Security technologies include the following:
- Firewalls: Separate the network from the Internet to prevent unauthorized access to the network.
- Anti-virus: Software that detects and removes viruses, malware, and other malicious programs.
- Spam filter: Software that automatically detects and removes unwanted e-mail.
- Intrusion detection system (IDS): A system that detects attempts to intrude into a network and issues warnings.
- Virtual Private Network (VPN): A network that encrypts communications over the Internet to provide a secure connection.
- Secure Sockets Layer (SSL): An encryption protocol (now succeeded by TLS) used to provide secure communications on websites.
Data compression technology is used to make data smaller in order to save storage space and speed up data transfer. There are two types of data compression: lossy compression and lossless compression.
- Lossy compression: Lossy compression refers to compression in which the original data cannot be fully recovered. This method is mainly used for multimedia files such as audio, video, and images. Typical algorithms include JPEG (still images), MPEG (video), and MP3 (audio). These algorithms achieve high compression ratios, but data quality can be affected.
- Lossless compression: Lossless compression refers to compression in which the original data can be fully restored. This method is mainly used for data files such as text, documents, spreadsheets, databases, and applications. Typical lossless compression algorithms include ZIP, GZIP, LZW, and DEFLATE. Their compression ratios are lower than those of lossy compression, but data quality is preserved.
Data compression techniques help save storage space and increase data transfer speeds, but the higher the compression ratio, the more time-consuming the compression and decompression process can be. In addition, lossy compression requires consideration of the balance between compression ratio and quality, since the quality of the data after compression may be lower than that of the original data.
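As a minimal illustration of the lossless case, the following sketch compresses repetitive data with Python's standard zlib module (DEFLATE) and restores it exactly; the sample data and compression level are my own choices:

```python
import zlib

data = b"to be or not to be, " * 100      # repetitive data compresses well
compressed = zlib.compress(data, level=9) # DEFLATE; higher level = slower, smaller
restored = zlib.decompress(compressed)

assert restored == data                   # lossless: exact round trip
print(len(data), "->", len(compressed), "bytes")
```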
This blog discusses these encryption and security techniques and data compression techniques in the following sections.
Implementation
Data encryption is a technology that protects data from unauthorized access and information leakage by converting it into an unreadable form. Through encryption, data is transformed, depending on a specific key, into a form that cannot be understood by anyone who does not know the key, so that only those holding the legitimate key can decrypt the data and restore it to its original state. This section describes various algorithms and implementations of this encryption technology.
Data compression is the process of reducing the size of data in order to represent information more efficiently. Its main purpose is to make data smaller, thereby saving storage space and improving data transfer efficiency. This section describes various data compression algorithms and their implementation in Python.
Access control is a security technique for controlling access to information systems and physical locations so that only authorized users can reach authorized resources. It is widely used to protect the confidentiality, integrity, and availability of data and to enforce security policies. This section describes various algorithms and implementation examples for access control.
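As a minimal sketch of one common access control model, role-based access control (RBAC); the roles, permissions, and user names below are illustrative assumptions:

```python
# A minimal role-based access control (RBAC) sketch.
ROLE_PERMISSIONS = {
    "admin":  {"read", "write", "delete"},
    "editor": {"read", "write"},
    "viewer": {"read"},
}
USER_ROLES = {"alice": "admin", "bob": "viewer"}

def is_allowed(user: str, action: str) -> bool:
    """Return True only if the user's role grants the requested action."""
    role = USER_ROLES.get(user)
    return action in ROLE_PERMISSIONS.get(role, set())

assert is_allowed("alice", "delete")
assert not is_allowed("bob", "write")
```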
- Distributed ledger technology and mesh networks used in decentralized IoT systems
Decentralized IoT systems have a decentralized architecture, as opposed to traditional centralized IoT systems: IoT devices and sensors exchange and process data directly on the network, without depending on a central server. Distributed ledger technology (DLT) is a technology in which a digital ledger is distributed and stored on multiple computers (nodes) on the network, with each node holding a copy of the ledger that is kept synchronized across all nodes. This technology is used to enhance data transparency, security, and tamper resistance.
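A toy sketch of the core DLT idea, a hash-chained ledger whose integrity any node can check; the block fields and sensor data are illustrative, and real DLTs add consensus and networking on top:

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Hash the block's contents deterministically."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(ledger: list, data: str) -> None:
    """Chain each new block to the previous one via its hash."""
    prev = ledger[-1]["hash"] if ledger else "0" * 64
    block = {"index": len(ledger), "data": data, "prev_hash": prev}
    block["hash"] = block_hash(block)
    ledger.append(block)

def is_valid(ledger: list) -> bool:
    """Tampering with any block breaks the hash chain."""
    for i, block in enumerate(ledger):
        body = {k: v for k, v in block.items() if k != "hash"}
        if block["hash"] != block_hash(body):
            return False
        if i > 0 and block["prev_hash"] != ledger[i - 1]["hash"]:
            return False
    return True

ledger = []
append_block(ledger, "sensor A: 21.5 C")
append_block(ledger, "sensor B: 22.1 C")
assert is_valid(ledger)
ledger[0]["data"] = "tampered"   # any modification is detectable
assert not is_valid(ledger)
```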
- The Solid project and NFTs
The Solid project, proposed by Sir Tim Berners-Lee, inventor of the World Wide Web, aims to build a decentralized web in which users own, manage, and control their own data. Its main aim is to address current problems with the centralization of the web and the collection of personal data.
A directed acyclic graph (DAG) is a graph data structure that appears in many settings, such as automatic task management and compilers. In this article, I would like to discuss DAGs.
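For instance, ordering tasks that depend on each other is a direct DAG application; Python's standard graphlib module (3.9+) provides a topological sorter, with the task names below being illustrative:

```python
# Ordering tasks on a DAG with the standard library's graphlib.
from graphlib import TopologicalSorter

# Each key depends on the tasks in its set: "build" needs "compile", etc.
dag = {
    "test":    {"build"},
    "build":   {"compile"},
    "compile": {"parse"},
}
order = list(TopologicalSorter(dag).static_order())
print(order)   # ['parse', 'compile', 'build', 'test']
```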
Rust is a programming language developed by Mozilla Research for systems programming, designed with an emphasis on high performance, memory safety, and safe concurrency and multi-threading. It also focuses on preventing bugs through strong static type checking at compile time.
This section provides an overview of Rust, its basic syntax, various applications, and concrete implementations.
Typically, IoT devices are small devices with sensors and actuators that use wireless communication to collect sensor data and control the actuators. Various communication protocols and technologies are used for wireless IoT control. This section describes examples of IoT implementations using these wireless technologies in various languages.
Technical Topics
The Internet is a global network of interconnected computer networks, and web technologies are the technologies for transmitting and viewing information and content on it. In other words, the Internet is the foundation of information and communication, and web technology provides the concrete tools and methods for transmitting, sharing, and browsing information on that foundation. Whereas Web 3.0 focuses on Semantic Web technologies and improved data semantics, Web3 represents a new architecture and philosophy of the distributed web, emphasizing distributed ledger technology, data ownership, and privacy. The two address different aspects of the future of the web and overlap in some respects, but they are distinct concepts.
- Technology and History of Cryptographic Information Security Reading Notes
- Data compression algorithms (1): Lossless compression
There are two main compression technologies: lossless compression and lossy compression. Lossless compression can perfectly reproduce the original data after compression, so one might think it alone would suffice.
The basic principle of lossless compression is to find repetitive parts of the data and encode them more compactly. Run-length encoding, sketched below, is the simplest example of this idea.
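A minimal run-length encoding sketch in Python; the input string and the single-digit count format are my own simplifying assumptions:

```python
from itertools import groupby

def rle_encode(s: str) -> str:
    """AAAABBBCC -> A4B3C2: each run becomes symbol + count."""
    return "".join(f"{ch}{len(list(run))}" for ch, run in groupby(s))

def rle_decode(encoded: str) -> str:
    """Inverse of rle_encode; assumes single-digit counts for brevity."""
    out = []
    it = iter(encoded)
    for ch in it:
        out.append(ch * int(next(it)))
    return "".join(out)

data = "AAAABBBCC"
packed = rle_encode(data)
assert rle_decode(packed) == data
print(data, "->", packed)   # AAAABBBCC -> A4B3C2
```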
"Lossy compression" is a compression technique that, literally, tolerates loss. When is it acceptable for errors to exist after decompression? It is applied, for example, to information such as images and audio that appears the same to humans even when minor differences exist. Moreover, the amount of data in this kind of information is overwhelmingly larger than that of the string data discussed above, so the need for compression is correspondingly greater.
One of the methods used to achieve such compression is the "exclusion trick," which simply removes a portion of the data, for example by dropping pixels, the unit of image information. The simplest approach is to mechanically remove every other row and every other column of pixels; with this method, the amount of data is reduced to 1/4 of the original.
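As a sketch of this idea, assuming NumPy is available and using a 2-D array as a stand-in for a grayscale image:

```python
# Downsampling by dropping every other row and column.
import numpy as np

image = np.arange(64).reshape(8, 8)    # stand-in for pixel data
small = image[::2, ::2]                # keep every other row and column

print(image.shape, "->", small.shape)  # (8, 8) -> (4, 4): 1/4 the data
```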
When considering data transmission and storage, the most important requirement is to store and transmit the data "completely and accurately." On the other hand, hardware such as memory and communication links always suffers from noise and malfunctions, and 100% correct operation is not guaranteed. It is therefore necessary to ensure complete accuracy through software measures, and "error correction technology" is one way to achieve this.
When considering transmission over a communication channel, the simplest way to improve reliability is to repeat the transmission several times, take a majority vote over the results of the trials, and select the most frequent result as the correct answer. However, this method has several problems. It works only when errors occur randomly: if errors have some systematic tendency, the majority vote will be biased by that tendency, and if the error rate is high, the number of trials must be increased.
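A small simulation of this majority-vote scheme; the error rate and repetition count are illustrative assumptions:

```python
import random
from collections import Counter

def send_with_noise(bit: int, error_rate: float = 0.1) -> int:
    """Flip the bit with the given probability to simulate channel noise."""
    return bit ^ 1 if random.random() < error_rate else bit

def transmit(bit: int, repetitions: int = 5) -> int:
    """Repeat the transmission and take a majority vote on the results."""
    received = [send_with_noise(bit) for _ in range(repetitions)]
    return Counter(received).most_common(1)[0][0]

# Usually recovers the sent bit; note this relies on errors being random.
print(transmit(1))
```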
The simplest cryptographic method begins with "having a shared secret." Suppose that when you want to send some data to a particular person, you share a secret number with that person alone (e.g., 322). Then, to send the value 8 as data, you send the sum of the two numbers (322 + 8 = 330), and the recipient subtracts the shared number from what was received to recover the data.
To make this more practical, you can use a shared number that is not easily guessed, for example a longer one. (For a 3-digit secret there are only 999 possibilities at most, and a computer can easily try every combination at this scale.) Modern cryptographic algorithms therefore use far longer keys: a 128-bit key allows 2^128, roughly 3.4 x 10^38, possible values, more than a trillion times a trillion times a trillion. Since trying them all would take on the order of a billion years with current computers, the method can be judged secure.
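To see why the 3-digit secret fails, the sketch below enumerates every candidate key, assuming the attacker has observed one plaintext/ciphertext pair from the running example above:

```python
# Brute-forcing the 3-digit additive cipher: with at most 999 candidate
# keys, a computer checks them all instantly. We assume the attacker
# knows one plaintext/ciphertext pair (values from the example above).
plaintext, ciphertext = 8, 330    # secret key was 322

candidates = [k for k in range(1, 1000) if plaintext + k == ciphertext]
print(candidates)   # [322] -- the key space is far too small to be secure
```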
The "signature" in a digital signature is something that anyone can read but that no one can copy or forge. This is the complete opposite of "digital" data, which anyone can copy. Digital signatures provide a solution to this apparent paradox.
Digital signatures are used not so much to sign something you send to someone else, as with an ordinary letter, but to let a computer check, when something signed arrives, that it really came from the claimed sender.
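As a sketch of this flow using the third-party cryptography package's documented RSA-PSS API (the message is illustrative): the sender signs with the private key, and any receiver can verify with the corresponding public key.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

message = b"signed document"
signature = private_key.sign(
    message,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# The receiver verifies with the public key; verify() raises
# InvalidSignature if the message or signature was altered.
public_key.verify(
    signature,
    message,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)
```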
In this article, we discuss Adversarial Examples and security.
With the spread of smartphones, it has become easy to obtain our location information, and various location-based social network services are on offer. The secondary use of such location information for analysis is spreading into many fields: it is used for public services such as traffic information and urban planning, as well as for commercial purposes such as trade-area analysis.
On the other hand, location information risks revealing privacy-sensitive information such as personal habits, interests, behaviors, and social circles. To allow third parties to utilize location information safely, data providers are required to apply processing known as anonymization to protect privacy.
Current safety standards for anonymization only prescribe the format of the anonymized data, which does not guarantee sufficient safety. This paper points out the problems in applying widely used anonymization techniques to location information and describes a statistical method, based on a Markov model, for evaluating the safety of anonymization methods.
The AWS firewall function has two mechanisms: "security groups" and "network ACLs." The former is set per EC2 instance, and the latter per subnet.
The reason for having two mechanisms at different levels is that they serve different purposes. Roughly speaking, network ACLs provide security on a per-subnet basis, while security groups control ports that must be handled individually for each instance.
In this article, we will discuss these two firewall functions and describe how to change security groups, which is necessary when web server software such as Apache HTTP Server or nginx is installed on an EC2 instance.
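For example, a security group rule opening port 80 for such a web server could be added with boto3; the group ID below is a placeholder, and valid AWS credentials are assumed:

```python
# Sketch: allow inbound HTTP (port 80) on a security group via boto3.
import boto3

ec2 = boto3.client("ec2")
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",   # hypothetical security group ID
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 80,
        "ToPort": 80,
        "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "HTTP from anywhere"}],
    }],
)
```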
Microservices need to be deployed in isolation and monitored in use. Monitoring current workloads and processing times also helps determine when to scale up and when to scale down. Another important aspect of a microservice-based architecture is security. One way to secure microservices is to give each service its own authentication and authorization module, but this approach quickly becomes problematic: because each microservice is deployed in isolation, it becomes very difficult to agree on common criteria for authenticating users, and ownership of users and their roles ends up distributed among the services. This chapter addresses these issues and describes solutions for securing, monitoring, and scaling microservice-based applications.
Single sign-on (SSO) is a mechanism that allows users to seamlessly access multiple different systems and applications by authenticating to them once.
Although SAML authentication and OAuth both achieve single sign-on, the key to understanding the difference between them is the difference between "authentication" and "authorization." SAML involves both authentication and authorization, while OAuth, in principle, handles only authorization, not authentication. In other words, OAuth does not perform authentication when APIs are used, which is a major difference from SAML authentication.
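One common way to realize the authorization side in a microservice setting is a signed token that a central service issues once and every service verifies locally; the sketch below uses the third-party PyJWT package, with an illustrative key and claims:

```python
import jwt  # pip install PyJWT

SECRET = "shared-signing-key"

# Issued once by the authentication service (the SSO side).
token = jwt.encode({"sub": "alice", "role": "editor"}, SECRET, algorithm="HS256")

# Each service verifies the signature instead of re-authenticating the user.
claims = jwt.decode(token, SECRET, algorithms=["HS256"])
print(claims["sub"], claims["role"])
```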
Virtual currencies such as Bitcoin, and the blockchain technology that supports them, are extremely novel and have the potential to significantly change the basic structure of society. They are still sometimes viewed with suspicion because of incidents such as the collapse of Bitcoin exchange operators.
Bitcoin and blockchain technology, however, affect not only the financial sector but also various other industries. What kinds of businesses are about to be created, what technologies make them possible, and how is the Japanese legal system responding?
In this book, experts in business and technology development around Bitcoin and blockchain share the know-how and knowledge gained through their practical experience, with the aim of developing the industry, addressing not only financial experts but also a wide range of business people involved in new business development and corporate planning.