diff --git a/README.md b/README.md index 64eb7522..82933c20 100644 --- a/README.md +++ b/README.md @@ -1,4 +1,4 @@ -# Multi Language Version +# Multiple Language Versions * [English](en/) * [French](fr/) * [Spanish](es/) diff --git a/en/10.0.md b/en/10.0.md index 366edb84..17e5911c 100644 --- a/en/10.0.md +++ b/en/10.0.md @@ -21,4 +21,4 @@ In the first section, we'll describe how to detect and set the correct locale in - [Directory](preface.md) - Previous Chapter: [Chapter 9 Summary](09.7.md) -- Next section: [Time zone](10.1.md) +- Next section: [Setting the default region](10.1.md) diff --git a/en/10.1.md b/en/10.1.md index f1704799..3c64667a 100644 --- a/en/10.1.md +++ b/en/10.1.md @@ -1,8 +1,8 @@ -# 10.1 Time zones +# 10.1 Setting the default region ## Finding out the locale -A locale is a set of descriptors for a particular geographical region, and can include specific language habits, text formatting, cultural idioms and a multitude of other settings. A locale's name is usually composed of three parts. First (and mandatory) is the locale's language abbreviation, such as "en" for English or "zh" for Chinese. The second part is an optional country specifier, and follows the first with an underscore. This specifier allows web applications to distinguish between different countries which speak the same language, such as "en_US" for U.S. English, and "en_UK" for British English. The last part is another optional specifier, and is added to the locale with a period. It specifies which character set to use, for instance "zh_CN.gb2312" specifies the gb2312 character set for Chinese. +A locale is a set of descriptors for a particular geographical region, and can include specific language habits, text formatting, cultural idioms and a multitude of other settings. A locale's name is usually composed of three parts. First (and mandatory) is the locale's language abbreviation, such as "en" for English or "zh" for Chinese. 
The second part is an optional country specifier, and follows the first with an underscore. This specifier allows web applications to distinguish between different countries which speak the same language, such as "en_US" for U.S. English, and "en_GB" for British English. The last part is another optional specifier, and is added to the locale with a period. It specifies which character set to use, for instance "zh_CN.gb2312" specifies the gb2312 character set for Chinese. Go defaults to the "UTF-8" encoding set, so i18n in Go applications does not need to consider the last parameter. Thus, in our examples, we'll only use the first two parts of locale descriptions as our standard i18n locale names. diff --git a/en/10.2.md b/en/10.2.md index c0a9349b..b9b7714b 100644 --- a/en/10.2.md +++ b/en/10.2.md @@ -136,5 +136,5 @@ This section described how to use and store local resources. We learned that we ## Links - [Directory](preface.md) -- Previous section: [Time zone](10.1.md) +- Previous section: [Setting the default region](10.1.md) - Next section: [International sites](10.3.md) diff --git a/en/12.0.md b/en/12.0.md index ca0b2827..67cccc87 100644 --- a/en/12.0.md +++ b/en/12.0.md @@ -1,8 +1,6 @@ # 12 Deployment and maintenance -So far, we have already described how to develop programs, debugger, and test procedures, as is often said: the development of the last 10% takes 90% of the time, so this chapter we will emphasize this last part of the 10% to truly become the most trusted and used by outstanding application, you need to consider some of the details of the above-mentioned 10% refers to these small details. 
- -In this chapter we will be four sections to introduce these small details of the deal, the first section describes how to program production services recorded on the log, how to logging, an error occurs a second section describes how to deal with our program, how to ensure as far as possible less impact to the user's access to, and the third section describes how to deploy Go standalone program, due to the current Go programs that can not be written as C daemon, then the process of how we manage the program running in the background so it? The fourth section describes the application of data backup and recovery, try to ensure that applications crash can maintain data integrity. +So far, we've covered the basics of developing, debugging and testing web applications in Go. As is often said, however: the last 10% of development takes 90% of the time. In this chapter, we will be emphasizing this last 10% of application development in order to truly craft reliable and high-quality web applications. In the first section, we will examine how production services generate logs, and the process of logging itself. The second section will describe dealing with runtime errors, and how to manage them when they occur so that the impact on end users is minimized. In the third section, we tackle the subject of deploying standalone Go programs, which can be tricky at first. As you might know, Go programs cannot daemonize themselves the way programs written in a language such as C can. We'll discuss how background processes are typically managed in Go. Finally, our fourth and last section will address the process of backing up and recovering application data in Go. We'll take a look at some techniques for ensuring that in the event of a crash, we will be able to maintain the integrity of our data. 
## Links diff --git a/en/12.1.md b/en/12.1.md index 1a109822..93db9b7d 100644 --- a/en/12.1.md +++ b/en/12.1.md @@ -1,15 +1,15 @@ # 12.1 Logs -We look forward to developing Web applications able to run the entire program of various events occurring during record each, Go language provides a simple log package, we use the package can easily achieve logging features, these log are based on the combined package fmt print function like panic for general printing, throwing error handling. Go current standard package only contains a simple function, if we want our application log to a file, and then to combine log achieve a lot of complex functions( written a Java or C++, the reader should have used log4j and log4cpp like logging tool ), you can use third-party developers a logging system, `https://github.com/cihub/seelog`, which implements a very powerful logging functions. Next, we describe how the system through the log to achieve our application log function. +We want to build web applications that can keep track of events which have occurred throughout execution, combining them all into one place for easy access later on, when we inevitably need to perform debugging or optimization tasks. Go provides a simple `log` package which we can use to help us implement basic logging functionality. Its printing functions mirror those of Go's `fmt` package, and variants such as `log.Panic` combine logging with error handling. Go's standard package only contains basic functionality for logging, however. There are many third-party logging tools that we can use to supplement it if our needs are more sophisticated (tools similar to log4j and log4cpp, if you've ever had to deal with logging in Java or C++). A popular, fully featured, open-source logging tool in Go is the [seelog](https://github.com/cihub/seelog) logging framework. Let's take a look at how we can use `seelog` to perform logging in our Go applications. 
-## Seelog Introduction +## Introduction to seelog -seelog with Go language of a logging system, it provides some simple function to implement complex log distribution, filtering, and formatting. Has the following main features: +Seelog is a logging framework for Go that provides some simple functionality for implementing logging tasks such as filtering and formatting. Its main features are as follows: -- XML dynamic configuration, you can not recompile the program and dynamic load configuration information +- Dynamic configuration via XML; you can load configuration parameters dynamically without recompiling your program - Supports hot updates, the ability to dynamically change the configuration without the need to restart the application -- Support multi- output stream that can simultaneously output the log to multiple streams, such as a file stream, network flow, etc. +- Supports multi-output streams that can simultaneously pipe log output to multiple destinations, such as a file stream, a network stream, etc. -- Support for different log output +- Support for different log outputs - Command line output - File Output @@ -17,13 +17,13 @@ seelog with Go language of a logging system, it provides some simple function to - Support log rotate - SMTP Mail -The above is only a partial list of features, seelog is a particularly powerful log processing systems, see the detailed contents of the official wiki. Next, I will briefly describe how to use it in your project: +The above is only a partial list of seelog's features. To fully take advantage of all of seelog's functionality, have a look at its [official wiki](https://github.com/cihub/seelog/wiki) which thoroughly documents what you can do with it. 
Let's see how we'd use seelog in our projects: -First install seelog +First install seelog: go get -u github.com/cihub/seelog -Then we look at a simple example: +Then let's write a simple example: package main @@ -34,12 +34,11 @@ Then we look at a simple example: log.Info("Hello from Seelog!") } +Compile and run the program. If you see a `Hello from Seelog!` entry in your application log, seelog has been successfully installed and is operating normally. -When compiled and run if there is a `Hello from seelog`, description seelog logging system has been successfully installed and can be a normal operation. +## Custom log processing with seelog -## Based seelog custom log processing - -seelog support custom log processing, the following is based on its custom log processing part of the package: +Seelog supports custom log processing. The following code snippet is based on the custom log processing portion of its package: package logs @@ -97,31 +96,33 @@ seelog support custom log processing, the following is based on its custom log p } -Above the main achievement of the three functions, +The above implements the three main functions: - `DisableLog` -Logger initialize global variables as seelog disabled state, mainly in order to prevent the Logger was repeatedly initialized +Initializes a global variable `Logger` with seelog disabled, mainly in order to prevent the logger from being repeatedly initialized + - `LoadAppConfig` -Depending on the configuration file to initialize seelog configuration information, where we read the configuration file by string set up, of course, you can read the XML file. Inside the configuration is as follows: +Initializes the configuration settings of seelog according to a configuration file. In our example we are reading the configuration from an in-memory string, but of course, you can read it from an XML file also. 
Inside the configuration, we set up the following parameters: - Seelog -minlevel parameter is optional, if configured, is higher than or equal to the level of the log will be recorded, empathy maxlevel. +The `minlevel` parameter is optional. If configured, logging levels which are greater than or equal to the specified level will be recorded. The optional `maxlevel` parameter is similarly used to configure the maximum logging level desired. + - Outputs -Output destination, where data is divided into two, one record to the log rotate file inside. Another set up a filter, if the error level is critical, it will send alarm messages. +Configures the output destinations. In our particular case, we channel our logging data into two output destinations. The first is a rolling log file where we continuously save the most recent window of logging data. The second destination is a filtered log which records only critical level errors. We additionally configure it to alert us via email when these types of errors occur. - Formats -Log format defines various +Defines the various logging formats. You can use custom formatting or predefined formatting; a full list of predefined formats can be found on seelog's [wiki](https://github.com/cihub/seelog/wiki/Format-reference) - `UseLogger` -Set the current logger for the corresponding log processing +Sets the current logger as our log processor -Above we defined a custom log processing package, the following example is to use: +Above, we've defined and configured a custom log processing package. The following code demonstrates how we'd use it: package main @@ -139,36 +140,38 @@ Above we defined a custom log processing package, the following example is to us logs.Logger.Critical("Server err:%v", err) } -## An error occurred mail +## Email notifications -The above example explains how to set up email, we adopted the following smtp configured to send e-mail: +The above example explains how to set up email notifications with `seelog`. 
As you can see, we used the following `smtp` configuration: -The format of the message through criticalemail configuration, and then send messages through other configuration server configuration, configured to receive mail through the recipient user, if there are multiple users can add one line. +We set the format of our alert messages through the `criticalemail` configuration, and provide our mail server parameters so that the alerts can be sent. We can also configure our notifier to send out alerts to additional users using the `recipient` configuration. It's a simple matter of adding one line for each additional recipient. -To test this code is working correctly, you can add code similar to the following one false news. But remember that after should delete it, otherwise you will receive on-line after a lot of junk e-mail. +To test whether or not this code is working properly, you can add a fake critical message to your application like so: logs.Logger.Critical("test Critical message") -Now, as long as our application online record a Critical information that you will receive an e-mail Email, so that once the online system problems, you can immediately informed by e-mail, you can timely processing. +Don't forget to delete it once you're done testing, or when your application goes live, your inbox may be flooded with email notifications. -## Using the Application log +Now, whenever our application logs a critical message while online, you and your specified recipients will receive a notification email. You and your team can then process and remedy the situation in a timely manner. -For the application logs, each person's application scenarios may vary, and some people use to do data analysis application logs, some people use the application logs do performance analysis, analysis of user behavior that some people do, and some is pure record, to facilitate the application of a problem when the auxiliary find the problem. 
+## Using application logs -As an example, we need to track a user attempts to log in the system operation. This will successful and unsuccessful attempts to record. Record the successful use of "Info" log level, rather than the successful use of "warn" level. If you want to find all unsuccessful landing, we can use the linux command tools like grep, as follows: +When it comes to logs, each application's use-case may vary. For example, some people use logs for data analysis purposes, others for performance optimization. Some logs are used to analyze user behavior and how people interact with your website. Of course, there are logs which are simply used to record application events as auxiliary data for finding problems. + +As an example, let's say we need to track user attempts at logging into our system. This involves recording both successful and unsuccessful login attempts into our log. We'd record successful logins at the "Info" log level, and unsuccessful attempts at the more serious "warn" level. If you're using a linux-type system, you can conveniently view all unsuccessful login attempts from the log using the `grep` command like so: # cat /data/logs/roll.log | grep "failed login" 2012-12-11 11:12:00 WARN : failed login attempt from 11.22.33.44 username password -In this way we can easily find the appropriate information, this will help us to do something for the application log statistics and analysis. In addition, we also need to consider the size of the log, for a high- traffic Web applications, log growth is quite terrible, so we seelog configuration files which set up logrotate, so that we can ensure that the log file will not be changing large and lead us not enough disk space can cause problems. +This way, we can easily find the appropriate information in our application log, which can help us to perform statistical analysis if needed. In addition, we also need to consider the size of logs generated by high-traffic web applications. 
These logs can sometimes grow unpredictably. To resolve this issue, we can set `seelog` up with the logrotate configuration to ensure that single log files do not consume excessive disk space. ## Summary -On the face of seelog system and how it can be customized based on the learning log system, and now we can easily construct a suitable demand powerful log processing the system. Data analysis log processing system provides a reliable data source, such as through the log analysis, we can further optimize the system, or application problems arise easy to find location problem, another seelog also provides a log grading function, through the configuration for min-level, we can easily set the output test or release message level. +In this section, we've learned the basics of `seelog` and how to build a custom logging system with it. We saw that we can easily configure `seelog` into as powerful a log processing system as we need, using it to supply us with reliable sources of data for analysis. Through log analysis, we can optimize our system and easily locate the sources of problems when they arise. In addition, `seelog` ships with various default log levels. We can use the `minlevel` configuration to easily set the appropriate message level for testing or release builds. ## Links diff --git a/en/12.2.md b/en/12.2.md index 54408a0f..1c408ae9 100644 --- a/en/12.2.md +++ b/en/12.2.md @@ -1,37 +1,38 @@ # 12.2 Errors and crashes -Our on-line soon after the Web application, then the probability of various error has occurred, Web applications may be a variety of daily operation errors, specifically as follows: +Once our web applications go live, it's likely that there will be some unforeseen errors. A few examples of common errors that may occur in the course of your application's daily operations are listed below: -- Database error: refers to access the database server or data -related errors. For example, the following may occur some database errors. 
+- Database Errors: errors related to accessing the database server or stored data. The following are some database errors which you may encounter: -- Connection Error: This type of error may be disconnected from the network database server, user name, password is incorrect, or the database does not exist. -- Query Error: illegal use of SQL cause an error like this SQL error if the program through rigorous testing should be avoided. -- Data Error: database constraint violation, such as a unique field to insert a duplicate primary key values ​​will complain, but if your application on-line through a rigorous testing before also avoid such problems. -- Application Runtime Error: This type of error range is very wide, covering almost all appear in the code error. Possible application errors are as follows: +- Connection Errors: indicates that a connection to the network database server could not be established, a supplied user name or password is incorrect, or that the database does not exist. +- Query Errors: the illegal or incorrect use of an SQL query can raise an error such as this. These types of errors can be avoided through rigorous testing. +- Data Errors: database constraint violation such as attempting to insert a field with a duplicate primary key. These types of errors can also be avoided through rigorous testing before deploying your application into a production environment. +- Application Runtime Errors: These types of errors vary greatly, covering almost all error codes which may appear during runtime. Possible application errors are as follows: -- File system and permissions: application to read the file does not exist, or do not have permission to read the file, or written to a file is not allowed to write, which will result in an error. If the application reads the file format is not correct will be given, such as configuration files should be INI configuration format, and set into JSON format an error. 
-- Third-party applications: If our application interfaces coupled with other third-party programs, such as articles published after the application automatically calls the hair micro-blogging interface, so the interface must be running to complete the function we publish an article. +- File system and permission errors: when the application attempts to read a file which does not exist or does not have permission to read, or when it attempts to write to a file which it is not allowed to write to, errors of this category will occur. A file system error will also occur if an application reads a file with an unexpected format, for instance a configuration file that should be in the INI format but is instead structured as JSON. +- Third-party application errors: These errors occur in applications which interface with other third-party applications or services. For instance, if an application publishes tweets after making calls to Twitter's API, it's obvious that Twitter's services must be up and running in order for our application to complete its task. We must also ensure that we supply these third-party interfaces with the appropriate parameters in our calls, or else they will also return errors. -- HTTP errors: These errors are based on the user's request errors, the most common is the 404 error. While there may be many different errors, but also one of the more common error 401 Unauthorized error( authentication is required to access the resource ), 403 Forbidden error( do not allow users to access resources ) and 503 error( internal program error ). -- Operating system error: These errors are due to the application on the operating system error caused mainly operating system resources are allocated over, leading to crashes, as well as the operating system disk is full, making it impossible to write, so it will cause a lot of errors. 
-- Network error: error refers to two aspects, one is the application when the user requests a network disconnection, thus disrupt the connection, this error does not cause the application to crash, but it will affect the effect of user access ; another on the one hand is an application to read the data on other networks, other network disconnect can cause read failures, such an application needs to do effective testing, to avoid such problems arise in case the program crashes. +- HTTP errors: These errors vary greatly, and are based on user requests. The most common is the 404 Not Found error, which arises when users attempt to access non-existent resources in your application. Other common HTTP errors include the 401 Unauthorized error (authentication is required to access the requested resource), the 403 Forbidden error (users are altogether refused access to this resource) and the 503 Service Unavailable error (the server is temporarily unable to handle the request, often due to overload or maintenance). +- Operating system errors: These sorts of errors occur at the operating system layer and can happen when operating system resources are over-allocated, leading to crashes and system instability. Another common occurrence at this level is when the operating system disk gets filled to capacity, making it impossible to write to. This naturally produces many errors. +- Network errors: network errors typically come in two flavors: one is when users issue requests to the application and the network disconnects, thus disrupting its processing and response phase. These errors do not cause the application to crash, but can affect user access to the website; the other is when an application attempts to read data from disconnected networks, causing read failures. Judicious testing is particularly important when making network calls to avoid such problems, which can cause your application to crash. 
-## Error processing of the target +## Error handling goals -Error handling in the realization, we must be clear error handling is what you want to achieve, error handling system should accomplish the following tasks: -- Notification access user errors: the matter is that a system error occurs, or user error, the user should be aware Web applications is a problem, the user 's current request could not be completed correctly. For example, a request for user error, we show a unified error page(404.html). When a system error occurs, we passed a custom error page display system is temporarily unavailable kind of error page(error.html). -- Record error: system error, generally what we call the function to return the case err is not nil, you can use the front section describes the system log to a log file. If it is some fatal error, the system administrator is notified via e-mail. General 404 such mistakes do not need to send messages to the log system only needs to record. -- Rollback the current request: If a user requests occurred during a server error, then the need to roll back the operation has been completed. Let's look at an example: a system will save the user submitted the form to the database, and to submit this data to a third-party server, but third-party server hung up, which resulted in an error, then the previously stored form data to a database should delete( void should be informed ), and should inform the user of the system error. -- Ensure that the existing program can be run to serve: We know that no one can guarantee that the program will be able to have a normal running, if one day the program crashes, then we need to log the error, and then immediately let the program up and running again, let the program continue to provide services, and then notify the system administrator through the logs to identify problems. +Before implementing error handling, we must be clear about what goals we are trying to achieve. 
In general, error handling systems should accomplish the following: + +- User error notifications: when system or user errors occur, causing current user requests to fail to complete, affected users should be notified of the problem. For example, for errors caused by user requests, we show a unified error page (404.html). When a system error occurs, we use a custom error page to provide feedback for users as to what happened, for instance that the system is temporarily unavailable (error.html). +- Log errors: when system errors occur (in general, when functions return non-nil error variables), a logging system such as the one described earlier should be used to record the event into a log file. If it is a fatal error, the system administrator should also be notified via e-mail. In general however, most 404 errors do not warrant the sending of email notifications; recording the event into a log for later scrutiny is often adequate. +- Roll back the current request operation: If a user request causes a server error, then we need to be able to roll back the current operation. Let's look at an example: a system saves a user-submitted form to its database, then submits this data to a third-party server. However, the third-party server disconnects and we are unable to establish a connection with it, which results in an error. In this case, the previously stored form data should be deleted from the database (or flagged as invalid), and the application should inform the user of the system error. +- Ensure that the application can recover from errors: we know that it's difficult for any program to guarantee 100% uptime, so we need to make provision for scenarios where our programs fail. For instance if our program crashes, we first need to log the error, notify the relevant parties involved, then immediately get the program up and running again. 
This way, our application can continue to provide services while a system administrator investigates and fixes the cause of the problem. ## How to handle errors -Error Handling In fact, we have eleven chapters in the first section which has been how to design error handling, here we have another example from a detailed explanation about how to handle different errors: +In chapter 11, we addressed the process of error handling and design using some examples. Let's go into these examples in a bit more detail, and see some other error handling scenarios: -- Notify the user errors: +- Notify the user of errors: -Notify the user when accessing the page we can have two kinds of errors: 404.html and error.html, the following were the source of the error page displays: +When an error occurs, we can present the user accessing the page with two kinds of error pages: 404.html and error.html. Here is an example of what the source code of an error page might look like: @@ -58,7 +59,7 @@ Notify the user when accessing the page we can have two kinds of errors: 404.htm -Another source: +Another example: @@ -86,7 +87,7 @@ Another source: -Error handling logic 404, an error if the system operation is similar, and we see that: +The 404 error-handling logic is shown below (system errors can be handled in a similar fashion): func (p *MyMux) ServeHTTP(w http.ResponseWriter, r *http.Request) { if r.URL.Path == "/" { @@ -113,9 +114,9 @@ Error handling logic 404, an error if the system operation is similar, and we se ## How to handle exceptions -We know that in many other languages have try... 
catch keywords used to capture the unusual situation, but in fact, many errors can be expected to occur without the need for exception handling, should be handled as an error, which is why the Go language using the function returns an error of design, these functions do not panic, for example, if a file is not found, os.Open returns an error, it will not panic; if you break the network connection to a write data, net.Conn series Type the Write function returns an error, they do not panic. These states where such procedures are to be expected. You know that these operations might fail, an error is returned because the designer has used a clear indication of this. This is the above stated error can be expected to occur. +We know that many other languages have `try... catch` keywords used to capture exceptional circumstances, but in fact, many errors can be expected to occur without the need for exception handling, and can instead be treated as errors. It's for this reason that Go functions return errors by design. For example, if a file is not found, `os.Open` returns an error rather than panicking; similarly, if a network connection gets disconnected during a data write operation, the `net.Conn` family of `Write` functions will return errors instead of panicking. These error states are to be expected in most applications, and Go makes it explicit that such operations might fail by returning error variables. These are exactly the kinds of expected errors described above. -But there is a situation, there are some operations almost impossible to fail, and in certain cases there is no way to return an error and can not continue, such a situation should panic. 
For example: if a program to calculate x [j], but j bounds, and this part of the code will lead to panic, like this one unexpected fatal error will cause panic, by default it will kill the process, it this allows a part of the code that is running from the error occurred panic goroutine recover operation, the occurrence of panic, the function of this part of the code and the code behind not be executed, which is specially designed such that Go, as separate from the errors and abnormal, panic fact exception handling. The following code, we expect to get the User via uid the username information, but if uid crossed the line will throw an exception, this time if we do not recover mechanism, the process will be killed, resulting in non- service program. So in order to process robustness, in some places need to establish recover mechanism. +There are, however, cases where `panic` is appropriate: operations where failure is almost impossible, or situations where there is no way to return an error and the operation cannot continue. Take for example a program that tries to obtain the value of an array at x[j], but the index j is out of bounds. This part of the code will cause the program to panic, as will other critical, unexpected errors of this nature. By default, an unrecovered panic will kill the entire process, but a deferred call to `recover` in the panicking goroutine gives our code an opportunity to regain control. When a panic occurs, the function in which it occurred stops executing immediately, and none of the code after the panic point runs. Go's `panic` was deliberately designed with this behavior in mind to keep errors and exceptions separate; `panic` is really just exception handling. In the example below, we expect that `User[UID]` will return a username from the `User` array, but the UID that we use is out of bounds and triggers a panic. 
If we do not have a recovery mechanism to deal with this immediately, the process will be killed, and the panic will propagate up the stack until our program finally crashes. In order for our application to be robust and resilient to these kinds of runtime errors, we need to implement recovery mechanisms in certain places. func GetUser(uid int) (username string) { defer func() { @@ -128,11 +129,11 @@ But there is a situation, there are some operations almost impossible to fail, a return } -The above describes the difference between errors and exceptions, so when we develop programs how to design it? The rules are simple: If you define a function may fail, it should return an error. When I call other package 's function, if this function is implemented well, I do not need to worry that it will panic, unless there is a true exception to happen, nor even that I should go with it. The panic and recover for its own development package which implements the logic to design for some special cases. +The above describes the differences between errors and exceptions. So, when it comes down to developing our Go applications, when do we use one or the other? The rules are simple: if you define a function that you anticipate might fail, then return an error variable. When calling another package's function, if it is implemented well, there should be no need to worry that it will panic unless a true exception has occurred (whether recovery logic has been implemented or not). Panic and recover should only be used internally inside packages to deal with special cases where the state of the program cannot be guaranteed, or when a programmer's error has occurred. Externally facing APIs should explicitly return error values. 
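The `GetUser` snippet above is truncated by the diff context; as a complete, runnable sketch of the same recover pattern (the `users` slice and its contents are illustrative, not from the book):

```go
package main

import "fmt"

var users = []string{"alice", "bob", "carol"}

// GetUser looks up a username by uid. If uid is out of bounds, the
// indexing expression panics; the deferred recover turns that panic
// into an ordinary zero-value return instead of crashing the process.
func GetUser(uid int) (username string) {
	defer func() {
		if x := recover(); x != nil {
			username = "" // fall back to the zero value on panic
		}
	}()
	username = users[uid] // panics when uid is out of range
	return
}

func main() {
	fmt.Println(GetUser(1))  // "bob"
	fmt.Println(GetUser(10)) // "" (recovered from the out-of-range panic)
}
```

Run directly, this prints `bob` followed by an empty line: the out-of-range access panics, the deferred function recovers, and the caller simply sees a zero value instead of a crashed process.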
## Summary

-This section summarizes when we deployed Web application how to handle various error: network error, database error, operating system errors, etc., when an error occurs, our program how to correctly handle: Show friendly error interface, rollback, logging, notify the administrator and other operations, and finally explains how to correctly handle errors and exceptions. General program errors and exceptions easily confused, but errors and exceptions in Go is a clear distinction, so tell us in a program designed to handle errors and exceptions should follow the principles of how to.
+This section summarizes how web applications should handle various errors such as network, database and operating system errors, among others. We've outlined several techniques to effectively deal with runtime errors, such as displaying user-friendly error notifications, rolling back actions, logging, and alerting system administrators. Finally, we explained how to correctly handle errors and exceptions. The concept of an error is often confused with that of an exception; however, in Go there is a clear distinction between the two. For this reason, we've discussed the principles of processing both errors and exceptions in web applications.

## Links

diff --git a/en/12.3.md b/en/12.3.md
index 0d439c39..8f46bd75 100644
--- a/en/12.3.md
+++ b/en/12.3.md
@@ -1,14 +1,14 @@
# 12.3 Deployment

-After completion of the development program, we now want to deploy Web applications, but how do we deploy these applications do ? Because after compiling Go programs is an executable file, written a C program using daemon readers must know you can achieve the perfect background program runs continuously, but still not perfect at present Go realization daemon, therefore, for a Go application deployment, we can use third-party tools to manage, there are many third-party tools, such as Supervisord, upstart, daemon tools, etc. This section describes my own system using current tools Supervisord.
+When our web application is finally production ready, what are the steps necessary to get it deployed? In Go, an executable file encapsulating our application is created after we compile our programs. Programs written in C can run perfectly as background daemon processes; however, Go does not yet have native support for daemons. The good news is that we can use third party tools to help us manage the deployment of our Go applications, examples of which are Supervisord, upstart and daemontools, among others. This section will introduce you to some basics of the Supervisord process control system.

-## Daemon
+## Daemons

-Currently Go programs can not be achieved daemon, see the Go detailed language bug: <`http://code.google.com/p/go/issues/detail?id=227`>, probably mean that it is difficult from the the use of existing fork a thread out, because there is no simple way to ensure that all the threads have used state consistency problem.
+Currently, Go programs cannot be run as daemon processes (for additional information, see the open issue on GitHub [here](https://github.com/golang/go/issues/227)). It's difficult to fork existing threads in Go because there is no way of ensuring a consistent state in all threads that have been used.

-But we can see some implementations daemon many online methods, such as the following two ways:
+We can, however, see many attempts at implementing daemons online, such as in the following two ways:

-- MarGo an implementation of the idea of using Command to run their own applications, if you really want to achieve, it is recommended that such programs
+- MarGo: an implementation of the concept of using `Command` to relaunch applications. If you really want to daemonize your applications, it is recommended to use code similar to the following:
 	d := flag.Bool("d", false, "Whether or not to launch in the background(like a daemon)")
@@ -38,7 +38,7 @@ But we can see some implementations daemon many online methods, such as the foll
 	}
 
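The `-d` flag fragment above is cut short by the diff; a self-contained sketch of the whole fork/exec idea might look like the following (the `stripDaemonFlag` helper is a hypothetical name, not MarGo's actual code):

```go
package main

import (
	"flag"
	"fmt"
	"os"
	"os/exec"
)

// stripDaemonFlag removes the "-d" flag from an argument list so that
// the re-launched child process does not fork yet again.
func stripDaemonFlag(args []string) []string {
	out := make([]string, 0, len(args))
	for _, a := range args {
		if a != "-d" && a != "--d" && a != "-d=true" {
			out = append(out, a)
		}
	}
	return out
}

func main() {
	d := flag.Bool("d", false, "whether or not to launch in the background (like a daemon)")
	flag.Parse()
	if *d {
		// Re-run ourselves without -d and let the parent exit, leaving the
		// child running in the background. Stdout/Stderr are left nil, so
		// the child's output is discarded (/dev/null).
		cmd := exec.Command(os.Args[0], stripDaemonFlag(os.Args[1:])...)
		if err := cmd.Start(); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("launched in background, pid:", cmd.Process.Pid)
		os.Exit(0)
	}
	// ... normal (foreground) server start-up would go here ...
}
```

Note that the child is still attached to the parent's session; a true daemon would also need to detach from the controlling terminal and reset its working directory, which is exactly the part that Go does not make easy.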
-- Another solution is to use `syscall`, but this solution is not perfect:

```
package main
@@ -111,25 +111,25 @@ But we can see some implementations daemon many online methods, such as the foll
	}
```

-The above proposed two implementations Go's daemon program, but I still do not recommend you to realize this, because the official announcement has not officially support daemon, of course, the first option is more feasible for now, but it is currently open source library skynet in adopting this program do daemon.
+While the two solutions above implement daemonization in Go, I still cannot recommend that you use either method since there is no official support for daemons in Go. Notwithstanding this fact, the first option is the more feasible one, and is currently being used by some well-known open source projects like [skynet](https://github.com/skynetservices/skynet) for implementing daemons.

## Supervisord

-Go has been described above, there are two options currently are to realize his daemon, but the government itself does not support this one, so it is recommended that you use sophisticated tools to manage our third-party applications, here I'll give you a present more extensive use of process management software: Supervisord. Supervisord is implemented in Python a very useful process management tool. supervisord will help you to manage applications into daemon process, but can be easily turned on via the command, shut down, restart, etc, and it is managed by the collapse will automatically restart once the process so that you can ensure that the program execution is interrupted in the case there are self-healing capabilities.
+Above, we've looked at two schemes that are commonly used to implement daemons in Go; however, both methods lack official support. So, it's recommended that you use a third-party tool to manage application deployment.
Here we take a look at the Supervisord project, implemented in Python, which provides extensive tools for process management. Supervisord will help you to daemonize your Go applications, also allowing you to do things like start, shut down and restart your applications with some simple commands, among many other actions. In addition, Supervisord-managed processes which have crashed can be restarted automatically, ensuring that programs can recover from any interruptions.

-> I stepped in front of a pit in the application, because all applications are made Supervisord parent born, then when you change the operating system file descriptor after, do not forget to restart Supervisord, light restart following application program useless. I just had the system installed after the first installed Supervisord, then start the deployment process, modify the file descriptor, restart the program, that the file descriptor is already 100,000, but in fact Supervisord this time or the default 1024, led him to manage the process All descriptors is 1024. pressure after opening up the system to start a newspaper run out of file descriptors, search for a long time to find the pit.
+> As an aside, I recently fell into a common pitfall while trying to deploy an application using Supervisord. All applications deployed using Supervisord are born out of the Supervisord parent process. When you change an operating system file descriptor limit, don't forget to completely restart Supervisord; simply restarting the application it is managing will not suffice. When I first deployed an application with Supervisord, I modified the default file descriptor limit, changing the default number from 1024 to 100,000 and then restarting my application. In reality, Supervisord continued using only 1024 file descriptors to manage all of my application's processes. Upon deploying my application, the logger began reporting a lack of file descriptors!
It was a long process finding and fixing this mistake, so beware!

-### Supervisord installation
+### Installing Supervisord

-Supervisord can `sudo easy_install supervisor` installation, of course, can also Supervisord official website to download, unzip and go to the folder where the source code, run `setup.py install` to install.
+Supervisord can easily be installed using `sudo easy_install supervisor`. Of course, there is also the option of directly downloading it from its official website, uncompressing it, going into the folder, then running `setup.py install` to install it manually.

-- Must be installed using easy_install setuptools
+- If you're going the `easy_install` route, then you need to first install `setuptools`

-Open the `http://pypi.python.org/pypi/setuptools# files`, depending on your system version of python download the appropriate file, and then execute `sh setuptoolsxxxx.egg`, so that you can use easy_install command to install Supervisord.
+Go to `http://pypi.python.org/pypi/setuptools#files` and download the appropriate file, depending on your system's python version. Enter the directory and execute `sh setuptoolsxxxx.egg`. When the script is done, you'll be able to use the `easy_install` command to install Supervisord.

-### Supervisord Configure
+### Configuring Supervisord

-Supervisord default configuration file path is `/etc/supervisord.conf`, through a text editor to modify this file, the following is a sample configuration file:
+Supervisord's default configuration file is located at `/etc/supervisord.conf`, and can be modified using a text editor.
The following is what a typical configuration file may look like:

	;/etc/supervisord.conf
	[unix_http_server]
@@ -172,19 +172,18 @@ Supervisord default configuration file path is `/etc/supervisord.conf`, through

### Supervisord management

-After installation is complete, there are two Supervisord available command line supervisor and supervisorctl, command explained as follows:
+After installation is complete, two Supervisord commands become available to you on the command line: `supervisord` and `supervisorctl`. The commands are as follows:

-- Supervisord, initial startup Supervisord, start, set in the configuration management process.
-- Supervisorctl stop programxxx, stop one process(programxxx), programxxx for the [program: blogdemon] in configured value, this example is blogdemon.
-- Supervisorctl start programxxx, start a process
-- Supervisorctl restart programxxx, restarting a process
-- Supervisorctl stop all, stop all processes, Note: start, restart, stop will not load the latest configuration files.
-- Supervisorctl reload, load the latest configuration file, and press the new configuration to start, manage all processes.
+- `supervisord`: starts the Supervisord daemon and launches the processes defined in its configuration.
+- `supervisorctl stop programxxx`: stop the programxxx process, where programxxx is a value configured in your `supervisord.conf` file. For instance, if you have something like `[program:blogdemon]` configured, you would use the `supervisorctl stop blogdemon` command to kill the process.
+- `supervisorctl start programxxx`: start the programxxx process
+- `supervisorctl restart programxxx`: restart the programxxx process
+- `supervisorctl stop all`: stop all processes; note: start, restart and stop will not load the latest configuration files.
+- `supervisorctl reload`: reload the latest configuration file and restart all managed processes according to the new configuration.
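To make the configuration concrete, a minimal `[program:...]` entry supervising a compiled Go binary might look like the following (the program name `blogdemon` echoes the example used in this section; the paths and user are illustrative assumptions):

```ini
; appended to /etc/supervisord.conf
[program:blogdemon]
command=/data/blog/blogdemon          ; the compiled Go binary to run
directory=/data/blog                  ; working directory before exec
autostart=true                        ; launch when supervisord starts
autorestart=true                      ; restart automatically after a crash
stdout_logfile=/var/log/blogdemon.log
stderr_logfile=/var/log/blogdemon.err
user=www-data                         ; drop privileges to this user
```

With such a section in place, `supervisorctl start blogdemon` and `supervisorctl stop blogdemon` control the process by name, and `supervisorctl reload` picks up any edits to the section.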
## Summary

-This section we describe how to implement daemon of the Go, but due to the current lack of Go-daemon implementation, need to rely on third-party tools to achieve the application daemon management approach, so here describes a process using python to write management tools Supervisord by Supervisord can easily put our Go up and application management.
-
+In this section, we described how to implement daemons in Go. We learned that Go does not natively support daemons, and that we need to use third-party tools to help us manage them. One such tool is the Supervisord process control system which we can use to easily deploy and manage our Go programs.

## Links

diff --git a/en/12.4.md b/en/12.4.md
index 62e6098a..4ff8217e 100644
--- a/en/12.4.md
+++ b/en/12.4.md
@@ -1,106 +1,111 @@
# 12.4 Backup and recovery

-This section we discuss another aspect of application management: production server data backup and recovery. We often encounter the production server network is broken, bad hard drive, operating system crash, or if the database is unavailable a variety of unusual circumstances, so maintenance personnel need to produce applications and data on the server to do remote disaster recovery, cold prepare hot standby ready. In the next presentation, explained how the backup application, how to backup/restore MySQL database and Redis databases.
+In this section, we'll discuss another aspect of application management: data backup and recovery on production servers. We often encounter situations where production servers don't behave as we expect them to. Server network outages, hard drive malfunctions, operating system crashes and other similar events can cause databases to become unavailable. The need to recover from these types of events has led to the emergence of many cold standby/hot standby tools that can help to facilitate disaster recovery remotely.
In this section, we'll explain how to back up deployed applications in addition to backing up and restoring any MySQL and Redis databases you might be using.

## Application Backup

-In most cluster environment, Web applications, the basic need for backup, because this is actually a copy of the code, we are in the local development environment, or the version control system has to maintain the code. But many times, a number of development sites require users to upload files, then we need for these users to upload files for backup. In fact, now there is a suitable approach is to put the needs and site-related files stored on the storage to the cloud storage, so even if the system crashes, as long as we still cloud file storage, at least the data is not lost.
+In most cluster environments, web applications do not need to be backed up since they are actually copies of code from our local development environment, or from a version control system. In many cases however, we need to back up data which has been supplied by the users of our site. For instance, when sites require users to upload files, we need to be able to back up any files that have been uploaded by users to our website. The current approach for providing this kind of redundancy is to utilize so-called cloud storage, where user files and other related resources are persisted into a highly available network of servers. If our system crashes, as long as user data has been persisted onto the cloud, we can at least be sure that no data will be lost.

-If we do not adopt cloud storage case, how to do a backup site do ? Here we introduce a file synchronization tool rsync: rsync backup site can be achieved in different file system synchronization, If the windows, then, need windows version cwrsync.
+But what about the cases where we did not back up our data to a cloud service, or where cloud storage was not an option? How do we back up data from our web applications then?
Here, we describe a tool called rsync, which can commonly be found on unix-like systems. Rsync is a tool which can be used to synchronize files residing on different systems, and a perfect use-case for this functionality is to keep our website backed up.

> Note: Cwrsync is an implementation of rsync for the Windows environment

### Rsync installation

-rsync 's official website: http://rsync.samba.org/can get the latest version from the above source. Of course, because rsync is a very useful software, so many Linux distributions will include it, including the.
+You can find the latest version of rsync from its [official website](http://rsync.samba.org/). Of course, because rsync is very useful software, many Linux distributions will already have it installed by default.

-Package Installation
+Package Installation:

	# sudo apt-get install rsync ; Note: debian, ubuntu and other online installation methods ;
	# yum install rsync ; Note: Fedora, Redhat, CentOS and other online installation methods ;
	# rpm -ivh rsync ; Note: Fedora, Redhat, CentOS and other rpm package installation methods ;

-Other Linux distributions, please use the appropriate package management methods to install. Installing source packages
+For the other Linux distributions, please use the appropriate package management methods to install it. Alternatively, you can build it yourself from the source:

	tar xvf rsync-xxx.tar.gz
	cd rsync-xxx
	./configure --prefix=/usr; make; make install
-
+
+> Note: To compile and install rsync from source, you must first have compiler tools such as gcc installed.
+
-### Rsync Configure
+### Rsync Configuration

-rsync mainly in the following three configuration files rsyncd.conf( main configuration file ), rsyncd.secrets( password file ), rsyncd.motd(rysnc server information ).
+Rsync can be configured from three main configuration files: `rsyncd.conf` which is the main configuration file, `rsyncd.secrets` which holds passwords, and `rsyncd.motd` which contains server information.

-Several documents about this configuration we can refer to the official website or other websites rsync introduction, here the server and client how to open
+You can refer to the official documentation on rsync's website for more detailed explanations, but here we will simply introduce the basics of setting up rsync:

-- Services client opens:
+- Starting an rsync daemon server-side:

	`# /usr/bin/rsync --daemon --config=/etc/rsyncd.conf`

-- daemon parameter approach is to run rsync in server mode. Join the rsync boot
+- the `--daemon` parameter is for running rsync in server mode. Make this the default boot-time setting by joining it to the `rc.local` file:

-	`echo 'rsync - daemon' >> /etc/rc.d/rc.local`
+	`echo 'rsync --daemon' >> /etc/rc.d/rc.local`

-Set rsync password
+Set up an rsync username and password, making sure that the password file is owned only by root, so that local unauthorized users or exploits do not have access to it. If these permissions are not set correctly, rsync may not boot:

	echo 'Your Username: Your Password' > /etc/rsyncd.secrets
	chmod 600 /etc/rsyncd.secrets

- Client synchronization:

-Clients can use the following command to synchronize the files on the server:
+Clients can synchronize server files with the following command:

-	rsync -avzP --delete --password-file=rsyncd.secrets username@192.168.145.5::www/var/rsync/backup
+	rsync -avzP --delete --password-file=rsyncd.secrets username@192.168.145.5::www /var/rsync/backup

-This command, briefly explain a few points:
+Let's break this down into a few key points:

-1.
`-avzP` is what the reader can use the `-help` Show
-2. `-delete` for example A, deleted a file, the time synchronization, B will automatically delete the corresponding files
-3. `-Password-file` client/etc/rsyncd.secrets set password, and server to `/etc/rsyncd.secrets` the password the same, so cron is running, you do not need the password
-4. This command in the " User Name" for the service side of the `/etc/rsyncd.secrets` the user name
-5. This command 192.168.145.5 as the IP address of the server
-6. :: www, note the two: number, www as a server configuration file `/etc/rsyncd.conf` in [www], meaning that according to the service on the client `/etc/rsyncd.conf` to synchronize them [www] paragraph, a: number, when used according to the configuration file does not directly specify the directory synchronization.
+1. `-avzP` are some common options. Use `rsync --help` to review what these do.
+2. `--delete` deletes extraneous files on the receiving side. For example, if files are deleted on the sending side, the next time the two machines are synchronized, the receiving side will automatically delete the corresponding files.
+3. `--password-file` specifies a password file for accessing an rsync daemon. On the client side, this is typically the `/etc/rsyncd.secrets` file, whose contents must match the password configured in the server-side `/etc/rsyncd.secrets`. When using something like Cron to automate rsync, you won't need to manually enter a password.
+4. `username` specifies the username to be used in conjunction with the server-side `/etc/rsyncd.secrets` password
+5. `192.168.145.5` is the IP address of the server
+6. `::www` (note the double colons) specifies contacting an rsync daemon directly via TCP for synchronizing the `www` module according to the server-side configuration located in `/etc/rsyncd.conf`. When only a single colon is used, the rsync daemon is not contacted directly; instead, a remote-shell program such as ssh is used as the transport.
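Putting those six points together, the full client-side command reads as follows (the module name, address and paths repeat the illustrative values used above):

```
# -a archive mode, -v verbose, -z compress, -P progress + partial transfers
# --delete makes the local destination an exact mirror of the "www" module
rsync -avzP --delete --password-file=/etc/rsyncd.secrets \
      username@192.168.145.5::www /var/rsync/backup
```

Because `--password-file` removes the interactive prompt, this exact command can be run unattended from cron.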
-In order to synchronize real-time, you can set the crontab, keeping rsync synchronization every minute, of course, users can also set the level of importance depending on the file type of synchronization frequency. +In order to periodically synchronize files, you can set up a crontab file that will run rsync commands as often as needed. Of course, users can vary the frequency of synchronization according to how critical it is to keep certain directories or files up to date. ## MySQL backup -MySQL database application is still the mainstream, the current MySQL backup in two ways: hot backup and cold backup, hot backup is currently mainly used master/slave mode (master/slave) mode is mainly used for database synchronization separate read and write, but also can be used for hot backup data ), on how to configure this information, we can find a lot. Cold backup data, then that is a certain delay, but you can guarantee that the time period before data integrity, such as may sometimes be caused by misuse of our loss of data, then the master/slave model is able to retrieve lost data, but through cold backup can partially restore the data. +MySQL databases are still the mainstream, go-to solution for most web applications. The two most common methods of backing up MySQL databases are hot backups and cold backups. Hot backups are usually used with systems set up in a master/slave configuration to backup live data (the master/slave synchronization mode is typically used for separating database read/write operations, but can also be used for backing up live data). There is a lot of information available online detailing the various ways one can implement this type of scheme. For cold backups, incoming data is not backed up in real-time as is the case with hot backups. Instead, data backups are performed periodically. This way, if the system fails, the integrity of data before a certain period of time can still be guaranteed. 
For instance, in cases where a system malfunction causes data to be lost and the master/slave model is unable to retrieve it, cold backups can be used for a partial restoration.

-Cold backup shell script is generally used to achieve regular backup of the database, and then rsync synchronization through the above described non-local one server room.
+A shell script is generally used to implement regular cold backups of databases, executing synchronization tasks using rsync in a non-local mode.

+
The following is an example of a backup script that performs scheduled backups for a MySQL database. We use the `mysqldump` program which allows us to export the database to a file.

	#!/bin/bash

-	# The following configuration information, modify their own
+	# Configuration information; modify it as needed
	mysql_user="USER" #MySQL backup user
	mysql_password="PASSWORD" # MySQL backup user's password
	mysql_host="localhost"
	mysql_port="3306"
-	mysql_charset="utf8" # MySQL coding
-	backup_db_arr=("db1" "db2") # To back up the database name, separated by spaces separated by a plurality of such("db1" "db2" "db3")
-	backup_location=/var/www/mysql # backup data storage location, please do not end with a "/", this can keep the default, the program will automatically create a folder
+	mysql_charset="utf8" # MySQL encoding
+	backup_db_arr=("db1" "db2") # Names of the databases to be backed up, separated by spaces, e.g. ("db1" "db2" "db3")
+	backup_location=/var/www/mysql # Backup data storage location; do not end the path with a "/". This can be left at its default; the program will automatically create a folder
+
expire_backup_delete="ON" # Whether to delete outdated backups or not
	expire_days=3 # Set the expiration time of backups, in days (defaults to three days); this is only valid when the `expire_backup_delete` option is "ON"

-	# We do not need to modify the following start
-	backup_time=`date +%Y%m%d%H%M` # define detailed time backup
-	backup_Ymd=`date +%Y-%m-%d` # define the backup directory date time
+	# The initial settings below do not need to be modified
+	backup_time=`date +%Y%m%d%H%M` # Define the backup time format
+	backup_Ymd=`date +%Y-%m-%d` # Define the backup directory date time
	backup_3ago=`date -d '3 days ago' +%Y-%m-%d` # The date 3 days ago
-	backup_dir=$backup_location/$backup_Ymd # full path to the backup folder
-	welcome_msg="Welcome to use MySQL backup tools!" # greeting
+	backup_dir=$backup_location/$backup_Ymd # Full path to the backup folder
+	welcome_msg="Welcome to use MySQL backup tools!" # Greeting

-	# Determine whether to start MYSQL, mysql does not start the backup exit
+	# Determine whether MySQL is running; if not, abort the backup
	mysql_ps=`ps -ef | grep mysql | wc -l`
	mysql_listen=`netstat -an | grep LISTEN | grep $mysql_port | wc -l`
	if [ $mysql_ps -eq 0 -o $mysql_listen -eq 0 ]; then
-	echo "ERROR: MySQL is not running! backup stop!"
+	echo "ERROR: MySQL is not running! backup aborted!"
	exit
	else
	echo $welcome_msg
	fi

-	# Connect to mysql database, can not connect to the backup exit
+	# Connect to the mysql database; if a connection cannot be made, abort the backup
	mysql -h$mysql_host -P$mysql_port -u$mysql_user -p$mysql_password << end
	use mysql;
	select host, user from user where user='root' and host='localhost';
@@ -109,11 +114,11 @@ The following is a scheduled backup MySQL backup script, we use the mysqldump pr
	flag=`echo $?`
	if [ $flag != "0" ]; then
-	echo "ERROR: Can't connect mysql server! backup stop!"
+	echo "ERROR: Can't connect mysql server! backup aborted!"
	exit
	else
	echo "MySQL connect ok! Please wait......"
-	# Judgment does not define the backup database, if you define a backup is started, otherwise exit the backup
+	# Determine whether a backup database is defined. If so, begin the backup; if not, abort
	if [ "$backup_db_arr" != "" ]; then
	# dbnames=$(cut -d ',' -f1-5 $backup_database)
	# echo "arr is(${backup_db_arr[@]})"
@@ -124,59 +129,62 @@ The following is a scheduled backup MySQL backup script, we use the mysqldump pr
	`mysqldump -h $mysql_host -P $mysql_port -u $mysql_user -p$mysql_password $dbname --default-character-set=$mysql_charset | gzip > $backup_dir/$dbname-$backup_time.sql.gz`
	flag=`echo $?`
	if [ $flag == "0" ]; then
-	echo "database $dbname success backup to $backup_dir/$dbname-$backup_time.sql.gz"
+	echo "database $dbname successfully backed up to $backup_dir/$dbname-$backup_time.sql.gz"
	else
-	echo "database $dbname backup fail!"
+	echo "database $dbname backup has failed!"
	fi
	done
	else
-	echo "ERROR: No database to backup! backup stop"
+	echo "ERROR: No database to backup! backup aborted!"
	exit
	fi

-	# If you open the delete expired backup, delete operation
+	# If deleting expired backups is enabled, delete all expired backups
	if [ "$expire_backup_delete" == "ON" -a "$backup_location" != "" ]; then
	# `find $backup_location/ -type d -o -type f -ctime +$expire_days -exec rm -rf {} \;`
	`find $backup_location/ -type d -mtime +$expire_days | xargs rm -rf`
	echo "Expired backup data delete complete!"
	fi

-	echo "All database backup success! Thank you!"
+	echo "All databases have been successfully backed up! Thank you!"
	exit
	fi

-Modify shell script attributes:
+Modify the properties of the shell script like so:

	chmod 600 /root/mysql_backup.sh
	chmod +x /root/mysql_backup.sh

-Set attributes, add the command crontab, we set up regular automatic backups every day 00:00, then the backup script directory/var/www/mysql directory is set to rsync synchronization.
+Then add the crontab command:

	00 00 * * * /root/mysql_backup.sh

+This sets up regular backups of your databases to the `/var/www/mysql` directory every day at 00:00, which can then be synchronized using rsync.
+
## MySQL Recovery

-Earlier MySQL backup into hot backup and cold backup, hot backup main purpose is to be able to recover in real time, such as an application server hard disk failure occurred, then we can modify the database configuration file read and write into slave so that you can minimize the time interrupt service.
+We've just described some commonly used backup techniques for MySQL, namely hot backups and cold backups. To recap, the main goal of a hot backup is to be able to recover data in real-time after an application has failed in some way, such as in the case of a server hard-disk malfunction. We learned that this type of scheme can be implemented by modifying database configuration files so that databases are replicated onto a slave, minimizing interruption to services.

But sometimes we need to perform a cold backup of the SQL data recovery, as with database backup, you can import through the command:
+Hot backups are, however, sometimes inadequate. There are certain situations where cold backups are required to perform data recovery, even if it's only a partial one. When you have a cold backup of your database, you can use the following `MySQL` command to import it:

	mysql -u username -p database < backup.sql

-You can see, export and import database data is fairly simple, but if you also need to manage permissions, or some other character set, it may be a little more complicated, but these can all be done through a number of commands.
+As you can see, importing and exporting databases is a fairly simple matter. If you need to manage administrative privileges or deal with different character sets, this process may become a little more complicated, though there are a number of commands which will help you to do this.
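One detail worth noting: the backup script shown earlier compresses each dump with gzip, so restoring one of its files (the date-stamped path and database name here are illustrative) means decompressing into the same `mysql` import command:

```
gunzip < /var/www/mysql/2014-01-01/db1-201401010000.sql.gz | mysql -u username -p db1
```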
 ## Redis backup
 
-Redis is our most used NoSQL, its backup is also divided into two kinds: hot backup and cold backup, Redis also supports master/slave mode, so our hot backup can be achieved in this way, we can refer to the corresponding configuration the official document profiles, quite simple. Here we introduce cold backup mode: Redis will actually timed inside the memory cache data saved to the database file inside, we just backed up the corresponding file can be, is to use rsync backup to a previously described non-local machine room can be achieved.
+Redis is one of the most popular NoSQL databases, and both hot and cold backup techniques can also be used in systems which use it. Like MySQL, Redis also supports master/slave mode, which is ideal for implementing hot backups (refer to Redis' official documentation to learn how to configure this; the process is very straightforward). As for cold backups, Redis routinely saves cached data in memory to the database file on-disk. We can simply use the rsync backup method described above to synchronize it with a non-local machine.
 
 ## Redis recovery
 
-Redis Recovery divided into hot and cold backup recovery backup and recovery, hot backup and recovery purposes and methods of recovery with MySQL, as long as the modified application of the corresponding database connection.
+Similarly, Redis recovery can be divided into hot and cold backup recovery. The methods and objectives of recovering data from a hot backup of a Redis database are the same as those mentioned above for MySQL, as long as the Redis application is using the appropriate database connection.
 
-But sometimes we need to cold backup to recover data, Redis cold backup and recovery is actually just put the saved database file copy to Redis working directory, and then start Redis on it, Redis at boot time will be automatically loaded into the database file memory, the start speed of the database to determine the size of the file.
+A Redis cold backup recovery simply involves copying backed-up database files into the Redis working directory and then starting Redis. The database files are automatically loaded into memory at boot time; the speed with which Redis boots will depend on the size of the database files.
 
 ## Summary
 
-This section describes the application of part of our backup and recovery, that is, how to do disaster recovery, including file backup, database backup. Also introduced different systems using rsync file synchronization, MySQL database and Redis database backup and recovery, hope that through the introduction of this section, you can give as a developer of products for online disaster recovery program provides a reference solution.
+In this section, we looked at some techniques for backing up data as well as recovering from disasters which may occur after deploying our applications. We also introduced rsync, a tool which can be used to synchronize files on different systems. Using rsync, we can easily perform backup and restoration procedures for both MySQL and Redis databases, among others. We hope that by being introduced to some of these concepts, you will be able to develop disaster recovery procedures to better protect the data in your web applications.
 
 ## Links
 
diff --git a/en/12.5.md b/en/12.5.md
index 99065067..fa357814 100644
--- a/en/12.5.md
+++ b/en/12.5.md
@@ -1,20 +1,20 @@
 # 12.5 Summary
 
-This chapter discusses how to deploy and maintain Web applications we develop some related topics. The content is very important to be able to create a minimum maintenance based applications running smoothly, we must consider these issues.
+In this chapter, we discussed how to deploy and maintain our Go web applications. We also looked at some closely related topics which can help us to keep them running smoothly, with minimal maintenance.
-Specifically, the discussion in this chapter include:
+Specifically, we looked at:
 
-- Create a robust logging system that can record an error in case of problems and notify the system administrator
-- Handle runtime errors that may occur, including logging, and how to display to the user-friendly system there is a problem
-- Handling 404 errors, telling the user can not find the requested page
-- Deploy applications to a production environment (including how to deploy updates)
+- Creating a robust logging system capable of recording errors, and notifying system administrators
+- Handling runtime errors that may occur, including logging them, and relaying this information to users in a friendly manner
+- Handling 404 errors and notifying users that the requested page cannot be found
+- Deploying applications to a production environment (including how to deploy updates)
 - How to deploy highly available applications
-- Backup and restore files and databases
+- Backing up and restoring files and databases
 
-After reading this chapter, for the development of a Web application from scratch, those issues need to be considered, you should already have a comprehensive understanding. This chapter will help you in the actual environment management in the preceding chapter describes the development of the code.
+After reading this chapter, those thinking about developing a web application from scratch should have a comprehensive picture of how to do so; this chapter provided an introduction to managing deployment environments, while the preceding chapters focused on developing the code itself.
 
 ## Links
 
 - [Directory](preface.md)
 - Previous section: [Backup and recovery](12.4.md)
-- Next chapter: [Build a web framework](13.0.md)
+- Next chapter: [Building a web framework](13.0.md)