Version: V1.0

Basic environment overview

Description of micro service system

Microservice definition

Microservice architecture is a method of structuring a server application as a collection of small services, mainly on the back end, although the approach can also be applied to the front end. Each service runs in its own process and communicates with other processes over protocols such as HTTP/HTTPS, WebSocket, or AMQP. Each microservice implements a specific end-to-end domain or business function within a specific bounded context, and each must be developed independently and be independently deployable. Through microservices, a large application can be decomposed into multiple independent components, each with its own area of responsibility. When handling a user request, a microservice-based application may call many internal microservices to jointly produce its response.

The size of a microservice should not be the focus. The focus should be on creating loosely coupled services that can be developed, deployed, and scaled independently. When designing microservices, we should try to make them as small as possible, as long as this does not introduce too many direct dependencies on other microservices. More important than the size of a service is that it be highly cohesive and independent of other services.

The microservice architecture provides long-term flexibility: it allows applications to be built from multiple independently deployable services, and it offers better maintainability in complex, large-scale, highly scalable systems.

Another big advantage is independent horizontal scaling. A specific microservice can be scaled out horizontally without scaling the entire application as a unit. In this way, functional areas that need more CPU, network, storage, or other resources can easily be scaled separately.
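As a minimal sketch of scaling one service independently, assuming the services run as Kubernetes deployments and that a deployment named ocr-service exists (the name is hypothetical, used only for illustration):

```shell
# Scale only the OCR microservice to 5 replicas;
# every other service keeps its current replica count.
kubectl scale deployment ocr-service --replicas=5

# Verify the new replica count
kubectl get deployment ocr-service
```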

The microservice architecture is also more flexible in deployment. In the traditional deployment model, the entire application must be cloned in order to scale. In the microservice model, functionality is separated into smaller services, each of which can be scaled independently. The microservice approach allows flexible changes and fast iteration for each microservice.

Why microservices?

When a traditional, large monolithic application is deployed and run, it requires a large amount of memory and other resources on a single server. To scale horizontally, a huge monolith must replicate the entire application across multiple servers, so its scalability is extremely poor. In addition, such applications are often more complex, and their tightly coupled functional components make maintenance and updates more difficult.

There are many privately deployed products and different programming languages. In this situation, if a traditional MVC architecture wants to upgrade a single functional component of the application, a small change ripples through the whole system. In the microservice model, only the separated functional component needs to be upgraded.

Introduction to basic environment

tip

An enterprise deployment will install the following services (among others) on the servers you provide, as the basic environment support

  • Docker

Docker is open-source software and an open platform for developing, shipping, and running applications. Docker lets users decouple applications from infrastructure by packaging them into smaller units (containers), improving the speed of software delivery. Docker can package an application together with its dependencies into a portable container that can run on any Linux server, which helps achieve flexibility and portability: applications can run anywhere, whether on a public cloud, a private cloud, or a single machine. The Commander and Mage modules are deployed and provide services in the form of Docker images.
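A minimal sketch of running such a module as a container, assuming a hypothetical image name registry.example.com/commander:1.0 and port 8080 (both are placeholders, not the product's actual image or port):

```shell
# Pull the module image and run it as a detached container,
# mapping container port 8080 to host port 8080.
docker pull registry.example.com/commander:1.0
docker run -d --name commander -p 8080:8080 registry.example.com/commander:1.0

# Inspect the running container and its logs
docker ps
docker logs commander
```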

  • Kubernetes

Kubernetes, also referred to as k8s, is an open-source system mainly used to automatically deploy, scale, and manage containerized applications. It is designed to provide "a platform for automating the deployment, scaling, and operation of application containers across clusters of hosts". It supports a range of container tools, including Docker. As an open-source platform for automating container operations, it helps users avoid many of the manual deployment and scaling steps of the containerization process: you can group hosts running Linux containers together, and Kubernetes helps you manage these clusters easily and efficiently. The goal of k8s is to deploy container applications simply and efficiently, providing mechanisms for application deployment, planning, updating, and maintenance.
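A minimal sketch of the deployment and update mechanisms described above, using a hypothetical deployment named web built from the stock nginx image:

```shell
# Create a two-replica deployment from an image and expose it inside the cluster.
kubectl create deployment web --image=nginx:1.20.2 --replicas=2
kubectl expose deployment web --port=80

# Roll out a new image version; Kubernetes replaces pods gradually
# and keeps the service available during the update.
kubectl set image deployment/web nginx=nginx:1.21.0
kubectl rollout status deployment/web
```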

  • harbor

Harbor is an enterprise registry server used to store and distribute Docker images. It extends the open-source Docker Distribution by adding necessary enterprise features such as security, identity, and management. As an enterprise private registry server, Harbor provides better performance and security. Docker images are stored on the Harbor server.
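A minimal sketch of pushing an image to such a private registry, assuming a hypothetical Harbor address harbor.example.com, its default "library" project, and a local image named commander:1.0:

```shell
# Authenticate against the private Harbor registry,
# then tag the local image into a Harbor project and push it.
docker login harbor.example.com
docker tag commander:1.0 harbor.example.com/library/commander:1.0
docker push harbor.example.com/library/commander:1.0
```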

  • Istio

Istio manages traffic between services, enforces access policies, and aggregates telemetry data without requiring changes to application code. Istio layers onto existing distributed applications transparently, simplifying deployment complexity. Network operators can manage the networking of all their services in a consistent way without adding developer overhead, implement best practices such as canary releases, and gain insight into applications to determine where to focus performance improvements.
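As a sketch of the canary-release pattern mentioned above, the following applies a weighted Istio VirtualService. The service name web and the subsets v1/v2 are hypothetical, and a matching DestinationRule defining those subsets is assumed to already exist:

```shell
# Route 90% of traffic to the stable v1 and 10% to the canary v2,
# with no change to the application's own code.
kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: web
spec:
  hosts:
  - web
  http:
  - route:
    - destination:
        host: web
        subset: v1
      weight: 90
    - destination:
        host: web
        subset: v2
      weight: 10
EOF
```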

  • Minio

MinIO is a high-performance object store compatible with the Amazon S3 cloud storage service. Since its inception, MinIO's software-defined suite has run seamlessly in public, private, and edge clouds, making it a leader in hybrid-cloud and multi-cloud object storage. MinIO claims to be the world's fastest object storage server: on standard hardware, its object-storage read and write speeds are 183 GiB/s and 171 GiB/s respectively. The object store can serve as the primary storage layer for complex workloads such as Spark, Presto, TensorFlow, and H2O AI, and can act as a substitute for Hadoop HDFS.
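A minimal sketch of using the S3-compatible API through MinIO's mc client; the endpoint, credentials, bucket, and file names below are all hypothetical placeholders:

```shell
# Register the MinIO server with the mc client,
# create a bucket, upload a file, and list the bucket's contents.
mc alias set local http://minio.example.com:9000 ACCESS_KEY SECRET_KEY
mc mb local/backups
mc cp ./data.tar.gz local/backups/
mc ls local/backups
```

Because the API is S3-compatible, any S3 SDK (for example, AWS SDKs pointed at the MinIO endpoint) can be used in place of mc.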

In addition, MySQL, Redis, RabbitMQ, and Nginx will be integrated and deployed on the server

Base environment version

note

The version numbers are those of the integrated module package and may differ slightly in an actual deployment

Base environment components for Automation Commander, Intelligent Document Processing, and Conversational AI Platform:

Service         Version
kubernetes      v1.19.16
docker          v1.19.15
istio           v1.7.5
mysql           5.7.36
minio           2020-01-16
redis           5.0.10
rabbitmq        3.0
harbor          2.0.2
elasticsearch   7.16.2
nginx           1.20.2
etcd            3.3.8

info
info

It is worth noting that if you can provide middleware services of the corresponding versions, our applications also support connecting to them

Deployment mode description

info

The high availability deployment scheme should be implemented in the solution stage

Multi-node deployment and high availability deployment provide monitoring functions. High availability deployment requires the customer's cooperation in configuring alerts

The availability commitment is limited to server-side products and excludes the availability of Processes and clients

For deployments other than high availability deployment, refer to the Installation document

POC deployment

This scheme is applicable to one-off POC scenarios with no performance commitment

Customers need to provide cloud servers or virtual machines

Personnel who can implement it: presales, partners, customers

Standalone deployment

This scheme is applicable to small and medium-sized businesses, for example where the number of agents is fewer than 20

Customers need to provide cloud servers or virtual machines

Personnel who can implement it: presales, partners, customers

Multi node deployment

This scheme is applicable to scenarios with certain data security requirements and fewer than 100 agents.

Customers need to provide cloud servers or virtual machines and a load balancer address

This is a scaled-out deployment without a high availability commitment. If customers have high availability requirements, they need to provide middleware services such as MySQL and Redis.

Personnel who can implement it: presales, partners, customers, original vendor

High availability deployment

This scheme is applicable to core business, or where the number of agents is greater than 100, and can be customized and deployed according to the customer's environment.

High availability is committed (no higher than the availability of the customer's environment) and is determined according to the customer's actual environment

Customers need to provide cloud servers or virtual machines and a load balancer address

Implementation is supported by original-vendor engineers and is charged separately