diff --git a/README.md b/README.md
index 64eb7522..82933c20 100644
--- a/README.md
+++ b/README.md
@@ -1,4 +1,4 @@
-# Multi Language Version
+# Multiple Language Versions
* [English](en/)
* [French](fr/)
* [Spanish](es/)
diff --git a/en/10.0.md b/en/10.0.md
index 366edb84..17e5911c 100644
--- a/en/10.0.md
+++ b/en/10.0.md
@@ -21,4 +21,4 @@ In the first section, we'll describe how to detect and set the correct locale in
- [Directory](preface.md)
- Previous Chapter: [Chapter 9 Summary](09.7.md)
-- Next section: [Time zone](10.1.md)
+- Next section: [Setting the default region](10.1.md)
diff --git a/en/10.1.md b/en/10.1.md
index f1704799..3c64667a 100644
--- a/en/10.1.md
+++ b/en/10.1.md
@@ -1,8 +1,8 @@
-# 10.1 Time zones
+# 10.1 Setting the default region
## Finding out the locale
-A locale is a set of descriptors for a particular geographical region, and can include specific language habits, text formatting, cultural idioms and a multitude of other settings. A locale's name is usually composed of three parts. First (and mandatory) is the locale's language abbreviation, such as "en" for English or "zh" for Chinese. The second part is an optional country specifier, and follows the first with an underscore. This specifier allows web applications to distinguish between different countries which speak the same language, such as "en_US" for U.S. English, and "en_UK" for British English. The last part is another optional specifier, and is added to the locale with a period. It specifies which character set to use, for instance "zh_CN.gb2312" specifies the gb2312 character set for Chinese.
+A locale is a set of descriptors for a particular geographical region, and can include specific language habits, text formatting, cultural idioms and a multitude of other settings. A locale's name is usually composed of three parts. First (and mandatory) is the locale's language abbreviation, such as "en" for English or "zh" for Chinese. The second part is an optional country specifier, and follows the first with an underscore. This specifier allows web applications to distinguish between different countries which speak the same language, such as "en_US" for U.S. English, and "en_GB" for British English. The last part is another optional specifier, and is added to the locale with a period. It specifies which character set to use, for instance "zh_CN.gb2312" specifies the gb2312 character set for Chinese.
Go defaults to the "UTF-8" encoding set, so i18n in Go applications do not need to consider the last parameter. Thus, in our examples, we'll only use the first two parts of locale descriptions as our standard i18n locale names.
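+To make this concrete, here is a rough sketch (not code from this chapter) of how a handler might pick a locale from a hypothetical `lang` query parameter, falling back to a default when the value is missing or unsupported:
+
+	package main
+
+	import (
+		"fmt"
+		"net/http"
+	)
+
+	// defaultLocale and the supported set are assumptions for this sketch; a real
+	// application might also inspect the Accept-Language header, a cookie or the domain.
+	const defaultLocale = "en_US"
+
+	var supported = map[string]bool{"en_US": true, "zh_CN": true}
+
+	// localeOf picks the locale from a "lang" query parameter, falling back to the default.
+	func localeOf(r *http.Request) string {
+		if lang := r.URL.Query().Get("lang"); supported[lang] {
+			return lang
+		}
+		return defaultLocale
+	}
+
+	func main() {
+		http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
+			fmt.Fprintf(w, "locale: %s\n", localeOf(r))
+		})
+		http.ListenAndServe(":8080", nil)
+	}
+
+Other common sources for this information include the domain name, a cookie or the `Accept-Language` header.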
diff --git a/en/10.2.md b/en/10.2.md
index c0a9349b..b9b7714b 100644
--- a/en/10.2.md
+++ b/en/10.2.md
@@ -136,5 +136,5 @@ This section described how to use and store local resources. We learned that we
## Links
- [Directory](preface.md)
-- Previous section: [Time zone](10.1.md)
+- Previous section: [Setting the default region](10.1.md)
- Next section: [International sites](10.3.md)
diff --git a/en/12.0.md b/en/12.0.md
index ca0b2827..67cccc87 100644
--- a/en/12.0.md
+++ b/en/12.0.md
@@ -1,8 +1,6 @@
# 12 Deployment and maintenance
-So far, we have already described how to develop programs, debugger, and test procedures, as is often said: the development of the last 10% takes 90% of the time, so this chapter we will emphasize this last part of the 10% to truly become the most trusted and used by outstanding application, you need to consider some of the details of the above-mentioned 10% refers to these small details.
-
-In this chapter we will be four sections to introduce these small details of the deal, the first section describes how to program production services recorded on the log, how to logging, an error occurs a second section describes how to deal with our program, how to ensure as far as possible less impact to the user's access to, and the third section describes how to deploy Go standalone program, due to the current Go programs that can not be written as C daemon, then the process of how we manage the program running in the background so it? The fourth section describes the application of data backup and recovery, try to ensure that applications crash can maintain data integrity.
+So far, we've covered the basics of developing, debugging and testing web applications in Go. As is often said, however: the last 10% of development takes 90% of the time. In this chapter, we will be emphasizing this last 10% of application development in order to truly craft reliable and high quality web applications. In the first section, we will examine how production services generate logs, and the process of logging itself. The second section will describe dealing with runtime errors, and how to manage them when they occur so that the impact on end users is minimized. In the third section, we tackle the subject of deploying standalone Go programs, which can be tricky at first. As you might know, Go programs cannot easily be written as daemons the way they can in a language such as C. We'll discuss how background processes are typically managed in Go. Finally, our fourth and last section will address the process of backing up and recovering application data in Go. We'll take a look at some techniques for ensuring that in the event of a crash, we will be able to maintain the integrity of our data.
## Links
diff --git a/en/12.1.md b/en/12.1.md
index 1a109822..93db9b7d 100644
--- a/en/12.1.md
+++ b/en/12.1.md
@@ -1,15 +1,15 @@
# 12.1 Logs
-We look forward to developing Web applications able to run the entire program of various events occurring during record each, Go language provides a simple log package, we use the package can easily achieve logging features, these log are based on the combined package fmt print function like panic for general printing, throwing error handling. Go current standard package only contains a simple function, if we want our application log to a file, and then to combine log achieve a lot of complex functions( written a Java or C++, the reader should have used log4j and log4cpp like logging tool ), you can use third-party developers a logging system, `https://github.com/cihub/seelog`, which implements a very powerful logging functions. Next, we describe how the system through the log to achieve our application log function.
+We want to build web applications that can keep track of events which have occurred throughout execution, combining them all into one place for easy access later on, when we inevitably need to perform debugging or optimization tasks. Go provides a simple `log` package which we can use to help us implement simple logging functionality. Its functions are built on top of the printing functions from the `fmt` package, with additions such as `Panic` and `Fatal` variants for error handling. Go's standard package only contains basic functionality for logging, however. There are many third party logging tools that we can use to supplement it if our needs are more sophisticated (tools similar to log4j and log4cpp, if you've ever had to deal with logging in Java or C++). A popular and fully featured, open-source logging tool in Go is the [seelog](https://github.com/cihub/seelog) logging framework. Let's take a look at how we can use `seelog` to perform logging in our Go applications.
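+As a quick illustration of the standard library on its own (a minimal sketch, not tied to any particular project layout; the file name is only an example), the `log` package can be pointed at a file before reaching for something like seelog:
+
+	package main
+
+	import (
+		"log"
+		"os"
+	)
+
+	func main() {
+		// Append to (or create) a log file instead of writing to stderr.
+		f, err := os.OpenFile("app.log", os.O_CREATE|os.O_WRONLY|os.O_APPEND, 0644)
+		if err != nil {
+			log.Fatal(err)
+		}
+		defer f.Close()
+
+		logger := log.New(f, "[web] ", log.LstdFlags)
+		logger.Println("server started")
+		logger.Printf("handled request in %dms", 42)
+	}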
-## Seelog Introduction
+## Introduction to seelog
-seelog with Go language of a logging system, it provides some simple function to implement complex log distribution, filtering, and formatting. Has the following main features:
+Seelog is a logging framework for Go that provides some simple functionality for implementing logging tasks such as filtering and formatting. Its main features are as follows:
-- XML dynamic configuration, you can not recompile the program and dynamic load configuration information
+- Dynamic configuration via XML; you can load configuration parameters dynamically without recompiling your program
- Supports hot updates, the ability to dynamically change the configuration without the need to restart the application
-- Support multi- output stream that can simultaneously output the log to multiple streams, such as a file stream, network flow, etc.
-- Support for different log output
+- Supports multiple output streams, simultaneously piping log output to several destinations such as a file stream, a network stream, etc.
+- Supports different kinds of log outputs:
- Command line output
- File Output
@@ -17,13 +17,13 @@ seelog with Go language of a logging system, it provides some simple function to
- Support log rotate
- SMTP Mail
-The above is only a partial list of features, seelog is a particularly powerful log processing systems, see the detailed contents of the official wiki. Next, I will briefly describe how to use it in your project:
+The above is only a partial list of seelog's features. To fully take advantage of all of seelog's functionality, have a look at its [official wiki](https://github.com/cihub/seelog/wiki) which thoroughly documents what you can do with it. Let's see how we'd use seelog in our projects:
-First install seelog
+First install seelog:
go get -u github.com/cihub/seelog
-Then we look at a simple example:
+Then let's write a simple example:
package main
@@ -34,12 +34,11 @@ Then we look at a simple example:
log.Info("Hello from Seelog!")
}
+Compile and run the program. If you see `Hello from Seelog!` in your application's log output, seelog has been successfully installed and is operating normally.
-When compiled and run if there is a `Hello from seelog`, description seelog logging system has been successfully installed and can be a normal operation.
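+Note that seelog's default logger is asynchronous, so it is worth flushing buffered messages before the program exits. A minimal sketch of the complete program, assuming the canonical seelog quick-start:
+
+	package main
+
+	import log "github.com/cihub/seelog"
+
+	func main() {
+		// Flush any buffered log messages before the program exits.
+		defer log.Flush()
+
+		log.Info("Hello from Seelog!")
+	}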
+## Custom log processing with seelog
-## Based seelog custom log processing
-
-seelog support custom log processing, the following is based on its custom log processing part of the package:
+Seelog supports custom log processing. The following code snippet is part of a custom log processing package built on top of it:
package logs
@@ -97,31 +96,33 @@ seelog support custom log processing, the following is based on its custom log p
}
-Above the main achievement of the three functions,
+The above implements the three main functions:
- `DisableLog`
-Logger initialize global variables as seelog disabled state, mainly in order to prevent the Logger was repeatedly initialized
+Initializes a global variable `Logger` with seelog disabled, mainly in order to prevent the logger from being repeatedly initialized
+
- `LoadAppConfig`
-Depending on the configuration file to initialize seelog configuration information, where we read the configuration file by string set up, of course, you can read the XML file. Inside the configuration is as follows:
+Initializes the configuration settings of seelog according to a configuration file. In our example we are reading the configuration from an in-memory string, but of course, you can read it from an XML file also. Inside the configuration, we set up the following parameters:
- Seelog
-minlevel parameter is optional, if configured, is higher than or equal to the level of the log will be recorded, empathy maxlevel.
+The `minlevel` parameter is optional. If configured, logging levels which are greater than or equal to the specified level will be recorded. The optional `maxlevel` parameter is similarly used to configure the maximum logging level desired.
+
- Outputs
-Output destination, where data is divided into two, one record to the log rotate file inside. Another set up a filter, if the error level is critical, it will send alarm messages.
+Configures the output destination. In our particular case, we channel our logging data into two output destinations. The first is a rolling log file where we continuously save the most recent window of logging data. The second destination is a filtered log which records only critical level errors. We additionally configure it to alert us via email when these types of errors occur.
- Formats
-Log format defines various
+Defines the various logging formats. You can use custom formatting, or predefined formatting; a full list of predefined formats can be found on seelog's [wiki](https://github.com/cihub/seelog/wiki/Format-reference). A rough sketch of a complete configuration string follows this list.
- `UseLogger`
-Set the current logger for the corresponding log processing
+Sets the current logger as our log processor
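+
+Here is that sketch. The element and attribute names follow seelog's wiki, while the file names, sizes and levels are placeholders rather than the values used by the package above:
+
+	package main
+
+	import log "github.com/cihub/seelog"
+
+	// An illustrative configuration: messages at info level and above go to a
+	// rolling log file, while critical messages are also filtered into a separate file.
+	// (An <smtp> output could be added inside the filter for email alerts.)
+	const appConfig = `
+	<seelog minlevel="info" maxlevel="critical">
+	    <outputs formatid="main">
+	        <rollingfile type="size" filename="./logs/app.log" maxsize="1000000" maxrolls="5"/>
+	        <filter levels="critical">
+	            <file path="./logs/critical.log"/>
+	        </filter>
+	    </outputs>
+	    <formats>
+	        <format id="main" format="%Date %Time [%LEV] %Msg%n"/>
+	    </formats>
+	</seelog>`
+
+	func main() {
+		logger, err := log.LoggerFromConfigAsBytes([]byte(appConfig))
+		if err != nil {
+			panic(err)
+		}
+		log.ReplaceLogger(logger)
+		defer log.Flush()
+
+		log.Info("configured from an in-memory string")
+		log.Critical("this one also lands in critical.log")
+	}
+
+This standalone snippet swaps the logger in with `ReplaceLogger`; the custom package above achieves the same effect through its `LoadAppConfig` and `UseLogger` helpers.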
-Above we defined a custom log processing package, the following example is to use:
+Above, we've defined and configured a custom log processing package. The following code demonstrates how we'd use it:
package main
@@ -139,36 +140,38 @@ Above we defined a custom log processing package, the following example is to us
logs.Logger.Critical("Server err:%v", err)
}
-## An error occurred mail
+## Email notifications
-The above example explains how to set up email, we adopted the following smtp configured to send e-mail:
+The above example explains how to set up email notifications with `seelog`. As you can see, we used the following `smtp` configuration:
d := flag.Bool("d", false, "Whether or not to launch in the background(like a daemon)")
@@ -38,7 +38,7 @@ But we can see some implementations daemon many online methods, such as the foll
}
-- Another solution is to use the syscall, but this solution is not perfect:
+- Another solution is to use `syscall`, but this solution is not perfect:
```
package main
@@ -111,25 +111,25 @@ But we can see some implementations daemon many online methods, such as the foll
}
```
-The above proposed two implementations Go's daemon program, but I still do not recommend you to realize this, because the official announcement has not officially support daemon, of course, the first option is more feasible for now, but it is currently open source library skynet in adopting this program do daemon.
+While the two solutions above implement daemonization in Go, I still cannot recommend that you use either method, since there is no official support for daemons in Go. Notwithstanding this fact, the first option is the more feasible one, and is currently being used by some well-known open source projects like [skynet](https://github.com/skynetservices/skynet) for implementing daemons.
## Supervisord
-Go has been described above, there are two options currently are to realize his daemon, but the government itself does not support this one, so it is recommended that you use sophisticated tools to manage our third-party applications, here I'll give you a present more extensive use of process management software: Supervisord. Supervisord is implemented in Python a very useful process management tool. supervisord will help you to manage applications into daemon process, but can be easily turned on via the command, shut down, restart, etc, and it is managed by the collapse will automatically restart once the process so that you can ensure that the program execution is interrupted in the case there are self-healing capabilities.
+Above, we've looked at two schemes that are commonly used to implement daemons in Go; however, both methods lack official support. So, it's recommended that you use a third-party tool to manage application deployment. Here we take a look at the Supervisord project, implemented in Python, which provides extensive tools for process management. Supervisord will help you to daemonize your Go applications, also allowing you to do things like start, shut down and restart your applications with some simple commands, among many other actions. In addition, Supervisord can automatically restart managed processes which have crashed, ensuring that programs can recover from any interruptions.
-> I stepped in front of a pit in the application, because all applications are made Supervisord parent born, then when you change the operating system file descriptor after, do not forget to restart Supervisord, light restart following application program useless. I just had the system installed after the first installed Supervisord, then start the deployment process, modify the file descriptor, restart the program, that the file descriptor is already 100,000, but in fact Supervisord this time or the default 1024, led him to manage the process All descriptors is 1024. pressure after opening up the system to start a newspaper run out of file descriptors, search for a long time to find the pit.
+> As an aside, I recently fell into a common pitfall while trying to deploy an application using Supervisord. All applications deployed using Supervisord are born out of the Supervisord parent process. When you change the operating system's file descriptor limit, don't forget to completely restart Supervisord; simply restarting the applications it is managing will not suffice. When I first deployed an application with Supervisord, I modified the operating system's file descriptor limit, changing it from the default of 1024 to 100,000, and then restarted my application. In reality, Supervisord continued using only 1024 file descriptors to manage all of my application's processes. Upon deploying my application, the logger began reporting a lack of file descriptors! It was a long process finding and fixing this mistake, so beware!
-### Supervisord installation
+### Installing Supervisord
-Supervisord can `sudo easy_install supervisor` installation, of course, can also Supervisord official website to download, unzip and go to the folder where the source code, run `setup.py install` to install.
+Supervisord can easily be installed using `sudo easy_install supervisor`. Of course, there is also the option of directly downloading it from its official website, uncompressing it, going into the folder then running `setup.py install` to install it manually.
-- Must be installed using easy_install setuptools
+- If you're going the `easy_install` route, then you need to first install `setuptools`
-Open the `http://pypi.python.org/pypi/setuptools# files`, depending on your system version of python download the appropriate file, and then execute `sh setuptoolsxxxx.egg`, so that you can use easy_install command to install Supervisord.
+Go to `http://pypi.python.org/pypi/setuptools#files` and download the appropriate file, depending on your system's python version. Enter the directory and execute `sh setuptoolsxxxx.egg`. When the script is done, you'll be able to use the `easy_install` command to install Supervisord.
-### Supervisord Configure
+### Configuring Supervisord
-Supervisord default configuration file path is `/etc/supervisord.conf`, through a text editor to modify this file, the following is a sample configuration file:
+Supervisord's default configuration file path is `/etc/supervisord.conf`. The file can be modified using an ordinary text editor, and the following is what a typical configuration may look like:
;/etc/supervisord.conf
[unix_http_server]
@@ -172,19 +172,18 @@ Supervisord default configuration file path is `/etc/supervisord.conf`, through
### Supervisord management
-After installation is complete, there are two Supervisord available command line supervisor and supervisorctl, command explained as follows:
+After installation is complete, two Supervisord commands become available to you on the command line: `supervisord` and `supervisorctl`. Their usage is explained below:
-- Supervisord, initial startup Supervisord, start, set in the configuration management process.
-- Supervisorctl stop programxxx, stop one process(programxxx), programxxx for the [program: blogdemon] in configured value, this example is blogdemon.
-- Supervisorctl start programxxx, start a process
-- Supervisorctl restart programxxx, restarting a process
-- Supervisorctl stop all, stop all processes, Note: start, restart, stop will not load the latest configuration files.
-- Supervisorctl reload, load the latest configuration file, and press the new configuration to start, manage all processes.
+- `supervisord`: starts the Supervisord service itself, launching and managing the processes set up in its configuration.
+- `supervisorctl stop programxxx`: stop the programxxx process, where programxxx is a value configured in your `supervisord.conf` file. For instance, if you have something like `[program: blogdemon]` configured, you would use the `supervisorctl stop blogdemon` command to kill the process (a minimal example of such a program section appears after this list).
+- `supervisorctl start programxxx`: start the programxxx process
+- `supervisorctl restart programxxx`: restart the programxxx process
+- `supervisorctl stop all`: stop all processes; note that `start`, `restart` and `stop` will not load the latest configuration files.
+- `supervisorctl reload`: load the latest configuration file and restart all managed processes under the new configuration.
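+
+As referenced above, a minimal `[program:...]` section for the hypothetical `blogdemon` application might look like this; the paths and the user are placeholders, not values taken from the sample configuration:
+
+	; Example program section (names and paths are placeholders)
+	[program:blogdemon]
+	command=/data/www/blogdemon
+	directory=/data/www
+	autostart=true
+	autorestart=true
+	stdout_logfile=/var/log/blogdemon.log
+	user=www-data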
## Summary
-This section we describe how to implement daemon of the Go, but due to the current lack of Go-daemon implementation, need to rely on third-party tools to achieve the application daemon management approach, so here describes a process using python to write management tools Supervisord by Supervisord can easily put our Go up and application management.
-
+In this section, we described how to implement daemons in Go. We learned that Go does not natively support daemons, and that we need to use third-party tools to help us manage them. One such tool is the Supervisord process control system which we can use to easily deploy and manage our Go programs.
## Links
diff --git a/en/12.4.md b/en/12.4.md
index 62e6098a..4ff8217e 100644
--- a/en/12.4.md
+++ b/en/12.4.md
@@ -1,106 +1,111 @@
# 12.4 Backup and recovery
-This section we discuss another aspect of application management: production server data backup and recovery. We often encounter the production server network is broken, bad hard drive, operating system crash, or if the database is unavailable a variety of unusual circumstances, so maintenance personnel need to produce applications and data on the server to do remote disaster recovery, cold prepare hot standby ready. In the next presentation, explained how the backup application, how to backup/restore MySQL database and Redis databases.
+In this section, we'll discuss another aspect of application management: data backup and recovery on production servers. We often encounter situations where production servers don't behave as we expect them to. Server network outages, hard drive malfunctions, operating system crashes and other similar events can cause databases to become unavailable. The need to recover from these types of events has led to the emergence of many cold standby/hot standby tools that can help to facilitate disaster recovery remotely. In this section, we'll explain how to backup deployed applications in addition to backing up and restoring any MySQL and Redis databases you might be using.
## Application Backup
-In most cluster environment, Web applications, the basic need for backup, because this is actually a copy of the code, we are in the local development environment, or the version control system has to maintain the code. But many times, a number of development sites require users to upload files, then we need for these users to upload files for backup. In fact, now there is a suitable approach is to put the needs and site-related files stored on the storage to the cloud storage, so even if the system crashes, as long as we still cloud file storage, at least the data is not lost.
+In most cluster environments, web applications do not need to be backed up since they are actually copies of code from our local development environment, or from a version control system. In many cases however, we need to backup data which has been supplied by the users of our site. For instance, when sites require users to upload files, we need to be able to backup any files that have been uploaded by users to our website. The current approach for providing this kind of redundancy is to utilize so-called cloud storage, where user files and other related resources are persisted into a highly available network of servers. If our system crashes, as long as user data has been persisted onto the cloud, we can at least be sure that no data will be lost.
-If we do not adopt cloud storage case, how to do a backup site do ? Here we introduce a file synchronization tool rsync: rsync backup site can be achieved in different file system synchronization, If the windows, then, need windows version cwrsync.
+But what about the cases where we did not backup our data to a cloud service, or where cloud storage was not an option? How do we backup data from our web applications then? Here, we describe a tool called rsync, which is commonly found on unix-like systems. Rsync is a tool which can be used to synchronize files residing on different systems, and a perfect use-case for this functionality is to keep our website backed up.
+
+> Note: Cwrsync is an implementation of rsync for the Windows environment
### Rsync installation
-rsync 's official website: http://rsync.samba.org/can get the latest version from the above source. Of course, because rsync is a very useful software, so many Linux distributions will include it, including the.
+You can find the latest version of rsync on its [official website](http://rsync.samba.org/). Of course, because rsync is very useful software, many Linux distributions will already have it installed by default.
-Package Installation
+Package Installation:
# sudo apt-get install rsync ; Note: debian, ubuntu and other online installation methods ;
# yum install rsync ; Note: Fedora, Redhat, CentOS and other online installation methods ;
# rpm -ivh rsync ; Note: Fedora, Redhat, CentOS and other rpm package installation methods ;
-Other Linux distributions, please use the appropriate package management methods to install. Installing source packages
+For other Linux distributions, please use the appropriate package management methods to install it. Alternatively, you can build it yourself from the source:
tar xvf rsync-xxx.tar.gz
cd rsync-xxx
	./configure --prefix=/usr; make; make install
-
+
+> Note: If you want to compile and install rsync from source, you will need build tools such as gcc installed first.
+
-Note: Before using source packages compiled and installed, you have to install gcc compiler tools such as job
-### Rsync Configure
+### Rsync Configuration
-rsync mainly in the following three configuration files rsyncd.conf( main configuration file ), rsyncd.secrets( password file ), rsyncd.motd(rysnc server information ).
+Rsync can be configured from three main configuration files: `rsyncd.conf` which is the main configuration file, `rsyncd.secrets` which holds passwords, and `rsyncd.motd` which contains server information.
-Several documents about this configuration we can refer to the official website or other websites rsync introduction, here the server and client how to open
+You can refer to the official documentation on rsync's website for more detailed explanations, but here we will simply introduce the basics of setting up rsync:
-- Services client opens:
+- Starting an rsync daemon server-side:
	`# /usr/bin/rsync --daemon --config=/etc/rsyncd.conf`
-- daemon parameter approach is to run rsync in server mode. Join the rsync boot
+- the `--daemon` parameter is for running rsync in server mode. Make this the default boot-time setting by joining it to the `rc.local` file:
-	`echo 'rsync - daemon' >> /etc/rc.d/rc.local`
+	`echo 'rsync --daemon' >> /etc/rc.d/rc.local`
-Set rsync password
+Set up an rsync username and password, making sure that the password file is owned only by root, so that local unauthorized users or exploits do not have access to it. If these permissions are not set correctly, rsync may not boot:
	echo 'Your Username: Your Password' > /etc/rsyncd.secrets
	chmod 600 /etc/rsyncd.secrets
- Client synchronization:
-Clients can use the following command to synchronize the files on the server:
+Clients can synchronize server files with the following command:
-	rsync -avzP --delete --password-file=rsyncd.secrets username@192.168.145.5::www/var/rsync/backup
+	rsync -avzP --delete --password-file=rsyncd.secrets username@192.168.145.5::www /var/rsync/backup
-This command, briefly explain a few points:
+Let's break this down into a few key points:
-1. `-avzP` is what the reader can use the `-help` Show
-2. `-delete` for example A, deleted a file, the time synchronization, B will automatically delete the corresponding files
-3. `-Password-file` client/etc/rsyncd.secrets set password, and server to `/etc/rsyncd.secrets` the password the same, so cron is running, you do not need the password
-4. This command in the " User Name" for the service side of the `/etc/rsyncd.secrets` the user name
-5. This command 192.168.145.5 as the IP address of the server
-6. :: www, note the two: number, www as a server configuration file `/etc/rsyncd.conf` in [www], meaning that according to the service on the client `/etc/rsyncd.conf` to synchronize them [www] paragraph, a: number, when used according to the configuration file does not directly specify the directory synchronization.
+1. `-avzP` are some common options. Use `rsync --help` to review what these do.
+2. `--delete` deletes extraneous files on the receiving side. For example, if files are deleted on the sending side, the next time the two machines are synchronized, the receiving side will automatically delete the corresponding files.
+3. `--password-file` specifies a password file for accessing an rsync daemon. On the client side, this is typically the `client/etc/rsyncd.secrets` file, and on the server side, it's `/etc/rsyncd.secrets`. When using something like Cron to automate rsync, you won't need to manually enter a password.
+4. `username` specifies the username to be used in conjunction with the server-side `/etc/rsyncd.secrets` password
+5. `192.168.145.5` is the IP address of the server
+6. `::www` (note the double colons) specifies contacting an rsync daemon directly via TCP for synchronizing the `www` module according to the server-side configurations located in `/etc/rsyncd.conf`. When only a single colon is used, the rsync daemon is not contacted directly; instead, a remote-shell program such as ssh is used as the transport.
-In order to synchronize real-time, you can set the crontab, keeping rsync synchronization every minute, of course, users can also set the level of importance depending on the file type of synchronization frequency.
+In order to periodically synchronize files, you can set up a crontab file that will run rsync commands as often as needed. Of course, users can vary the frequency of synchronization according to how critical it is to keep certain directories or files up to date.
## MySQL backup
-MySQL database application is still the mainstream, the current MySQL backup in two ways: hot backup and cold backup, hot backup is currently mainly used master/slave mode (master/slave) mode is mainly used for database synchronization separate read and write, but also can be used for hot backup data ), on how to configure this information, we can find a lot. Cold backup data, then that is a certain delay, but you can guarantee that the time period before data integrity, such as may sometimes be caused by misuse of our loss of data, then the master/slave model is able to retrieve lost data, but through cold backup can partially restore the data.
+MySQL databases are still the mainstream, go-to solution for most web applications. The two most common methods of backing up MySQL databases are hot backups and cold backups. Hot backups are usually used with systems set up in a master/slave configuration to backup live data (the master/slave synchronization mode is typically used for separating database read/write operations, but can also be used for backing up live data). There is a lot of information available online detailing the various ways one can implement this type of scheme. For cold backups, incoming data is not backed up in real-time as is the case with hot backups. Instead, data backups are performed periodically. This way, if the system fails, the integrity of data before a certain period of time can still be guaranteed. For instance, in cases where a system malfunction causes data to be lost and the master/slave model is unable to retrieve it, cold backups can be used for a partial restoration.
-Cold backup shell script is generally used to achieve regular backup of the database, and then rsync synchronization through the above described non-local one server room.
+A shell script is generally used to implement regular cold backups of databases, with rsync then synchronizing them to a server in another data center, as described above.
-The following is a scheduled backup MySQL backup script, we use the mysqldump program, this command can be exported to a database file.
+
+The following is an example of a backup script that performs scheduled backups for a MySQL database. We use the `mysqldump` program which allows us to export the database to a file.
	#!/bin/bash
-	# The following configuration information, modify their own
+	# Configuration information; modify it as needed
	mysql_user="USER" #MySQL backup user
	mysql_password="PASSWORD" # MySQL backup user's password
	mysql_host="localhost"
	mysql_port="3306"
-	mysql_charset="utf8" # MySQL coding
-	backup_db_arr=("db1" "db2") # To back up the database name, separated by spaces separated by a plurality of such("db1" "db2" "db3")
-	backup_location=/var/www/mysql # backup data storage location, please do not end with a "/", this can keep the default, the program will automatically create a folder
-	expire_backup_delete="ON" # delete outdated backups is turned OFF to ON ON to OFF
-	expire_days=3 # default expiration time for the three days the number of days, this is only valid when the expire_backup_delete open
+	mysql_charset="utf8" # MySQL encoding
+	backup_db_arr=("db1" "db2") # Names of the databases to be backed up, separating multiple databases with spaces, e.g. ("db1" "db2" "db3")
+	backup_location=/var/www/mysql # Backup data storage location; please do not end with a "/" and leave it at its default, for the program to automatically create a folder
+	expire_backup_delete="ON" # Whether to delete outdated backups or not
+	expire_days=3 # Set the expiration time of backups, in days (defaults to three days); this is only valid when the `expire_backup_delete` option is "ON"
-	# We do not need to modify the following start
-	backup_time=`date +%Y%m%d%H%M` # define detailed time backup
-	backup_Ymd=`date +%Y-%m-%d` # define the backup directory date time
+	# We do not need to modify the initial settings below
+	backup_time=`date +%Y%m%d%H%M` # Define the backup time format
+	backup_Ymd=`date +%Y-%m-%d` # Define the backup directory date time
	backup_3ago=`date -d '3 days ago' +%Y-%m-%d` # 3 days before the date
-	backup_dir=$backup_location/$backup_Ymd # full path to the backup folder
-	welcome_msg="Welcome to use MySQL backup tools!" # greeting
+	backup_dir=$backup_location/$backup_Ymd # Full path to the backup folder
+	welcome_msg="Welcome to use MySQL backup tools!" # Greeting
-	# Determine whether to start MYSQL, mysql does not start the backup exit
+	# Determine whether MySQL is running; if not, then abort the backup
	mysql_ps=`ps -ef | grep mysql | wc -l`
	mysql_listen=`netstat -an | grep LISTEN | grep $mysql_port | wc -l`
	if [ $mysql_ps == 0 -o $mysql_listen == 0 ]; then
-		echo "ERROR: MySQL is not running! backup stop!"
+		echo "ERROR: MySQL is not running! backup aborted!"
		exit
	else
		echo $welcome_msg
	fi
-	# Connect to mysql database, can not connect to the backup exit
+	# Connect to the mysql database; if a connection cannot be made, abort the backup
	mysql -h $mysql_host -P $mysql_port -u $mysql_user -p$mysql_password << end
	use mysql;
	select host, user from user where user='root' and host='localhost';
@@ -109,11 +114,11 @@ The following is a scheduled backup MySQL backup script, we use the mysqldump pr
	flag=`echo $?`
	if [ $flag != "0" ]; then
-		echo "ERROR: Can't connect mysql server! backup stop!"
+		echo "ERROR: Can't connect mysql server! backup aborted!"
		exit
	else
		echo "MySQL connect ok! Please wait......"
-		# Judgment does not define the backup database, if you define a backup is started, otherwise exit the backup
+		# Determine whether a backup database is defined or not. If so, begin the backup; if not, then abort
		if [ "$backup_db_arr" != "" ]; then
		# dbnames=$(cut -d ',' -f1-5 $backup_database)
		# echo "arr is(${backup_db_arr[@]})"
@@ -124,59 +129,62 @@ The following is a scheduled backup MySQL backup script, we use the mysqldump pr
			`mysqldump -h $mysql_host -P $mysql_port -u $mysql_user -p$mysql_password $dbname --default-character-set=$mysql_charset | gzip > $backup_dir/$dbname-$backup_time.sql.gz`
			flag=`echo $?`
			if [ $flag == "0" ]; then
-				echo "database $dbname success backup to $backup_dir/$dbname-$backup_time.sql.gz"
+				echo "database $dbname successfully backed up to $backup_dir/$dbname-$backup_time.sql.gz"
			else
-				echo "database $dbname backup fail!"
+				echo "database $dbname backup has failed!"
			fi
		done
	else
-		echo "ERROR: No database to backup! backup stop"
+		echo "ERROR: No database to backup! backup aborted!"
		exit
	fi
-	# If you open the delete expired backup, delete operation
+	# If deleting expired backups is enabled, delete all expired backups
	if [ "$expire_backup_delete" == "ON" -a "$backup_location" != "" ]; then
		# `find $backup_location/ -type d -o -type f -ctime +$expire_days -exec rm -rf {} \;`
		`find $backup_location/ -type d -mtime +$expire_days | xargs rm -rf`
		echo "Expired backup data delete complete!"
	fi
-	echo "All database backup success! Thank you!"
+	echo "All databases have been successfully backed up! Thank you!"
	exit
	fi
-Modify shell script attributes:
+Modify the properties of the shell script like so:
	chmod 600 /root/mysql_backup.sh
	chmod +x /root/mysql_backup.sh
-Set attributes, add the command crontab, we set up regular automatic backups every day 00:00, then the backup script directory/var/www/mysql directory is set to rsync synchronization.
+Then add the crontab command:
	00 00 * * * /root/mysql_backup.sh
+This sets up regular backups of your databases to the `/var/www/mysql` directory every day at 00:00, which can then be synchronized using rsync.
+
## MySQL Recovery
-Earlier MySQL backup into hot backup and cold backup, hot backup main purpose is to be able to recover in real time, such as an application server hard disk failure occurred, then we can modify the database configuration file read and write into slave so that you can minimize the time interrupt service.
+We've just described some commonly used backup techniques for MySQL, namely hot backups and cold backups. To recap, the main goal of a hot backup is to be able to recover data in real-time after an application has failed in some way, such as in the case of a server hard-disk malfunction. We learned that this type of scheme can be implemented by modifying database configuration files so that databases are replicated onto a slave, minimizing interruption to services.
-But sometimes we need to perform a cold backup of the SQL data recovery, as with database backup, you can import through the command:
+Hot backups are, however, sometimes inadequate. There are certain situations where cold backups are required to perform data recovery, even if it's only a partial one. When you have a cold backup of your database, you can use the following `MySQL` command to import it:
	mysql -u username -p database < backup.sql
-You can see, export and import database data is fairly simple, but if you also need to manage permissions, or some other character set, it may be a little more complicated, but these can all be done through a number of commands.
+As you can see, importing and exporting databases is a fairly simple matter. If you need to manage administrative privileges or deal with different character sets, this process may become a little more complicated, though there are a number of commands which will help you to do this.
## Redis backup
-Redis is our most used NoSQL, its backup is also divided into two kinds: hot backup and cold backup, Redis also supports master/slave mode, so our hot backup can be achieved in this way, we can refer to the corresponding configuration the official document profiles, quite simple. Here we introduce cold backup mode: Redis will actually timed inside the memory cache data saved to the database file inside, we just backed up the corresponding file can be, is to use rsync backup to a previously described non-local machine room can be achieved.
+Redis is one of the most popular NoSQL databases, and both hot and cold backup techniques can also be used in systems which use it. Like MySQL, Redis also supports master/slave mode, which is ideal for implementing hot backups (refer to Redis' official documentation to learn how to configure this; the process is very straightforward). As for cold backups, Redis routinely saves cached in-memory data to the database file on disk. We can simply use the rsync backup method described above to synchronize it with a non-local machine.
## Redis recovery
-Redis Recovery divided into hot and cold backup recovery backup and recovery, hot backup and recovery purposes and methods of recovery with MySQL, as long as the modified application of the corresponding database connection.
+Similarly, Redis recovery can be divided into hot and cold backup recovery. The methods and objectives of recovering data from a hot backup of a Redis database are the same as those mentioned above for MySQL, as long as the Redis application is using the appropriate database connection.
-But sometimes we need to cold backup to recover data, Redis cold backup and recovery is actually just put the saved database file copy to Redis working directory, and then start Redis on it, Redis at boot time will be automatically loaded into the database file memory, the start speed of the database to determine the size of the file.
+A Redis cold backup recovery simply involves copying backed-up database files into Redis' working directory, then starting Redis. The database files are automatically loaded into memory at boot time; the speed with which Redis boots will depend on the size of the database files.
## Summary
-This section describes the application of part of our backup and recovery, that is, how to do disaster recovery, including file backup, database backup. Also introduced different systems using rsync file synchronization, MySQL database and Redis database backup and recovery, hope that through the introduction of this section, you can give as a developer of products for online disaster recovery program provides a reference solution.
+In this section, we looked at some techniques for backing up data as well as recovering from disasters which may occur after deploying our applications. We also introduced rsync, a tool which can be used to synchronize files on different systems. Using rsync, we can easily perform backup and restoration procedures for both MySQL and Redis databases, among others. We hope that by being introduced to some of these concepts, you will be able to develop disaster recovery procedures to better protect the data in your web applications.
## Links
diff --git a/en/12.5.md b/en/12.5.md
index 99065067..fa357814 100644
--- a/en/12.5.md
+++ b/en/12.5.md
@@ -1,20 +1,20 @@
# 12.5 Summary
-This chapter discusses how to deploy and maintain Web applications we develop some related topics. The content is very important to be able to create a minimum maintenance based applications running smoothly, we must consider these issues.
+In this chapter, we discussed how to deploy and maintain our Go web applications. We also looked at some closely related topics which can help us to keep them running smoothly, with minimal maintenance.
-Specifically, the discussion in this chapter include:
+Specifically, we looked at:
-- Create a robust logging system that can record an error in case of problems and notify the system administrator
-- Handle runtime errors that may occur, including logging, and how to display to the user-friendly system there is a problem
-- Handling 404 errors, telling the user can not find the requested page
-- Deploy applications to a production environment (including how to deploy updates)
+- Creating a robust logging system capable of recording errors, and notifying system administrators
+- Handling runtime errors that may occur, including logging them, and relaying this information to users in a friendly manner
+- Handling 404 errors and notifying users that the requested page cannot be found
+- Deploying applications to a production environment (including how to deploy updates)
- How to deploy highly available applications
-- Backup and restore files and databases
+- Backing up and restoring files and databases
-After reading this chapter, for the development of a Web application from scratch, those issues need to be considered, you should already have a comprehensive understanding. This chapter will help you in the actual environment management in the preceding chapter describes the development of the code.
+After reading the contents of this chapter, those thinking about developing a web application from scratch should already have the full picture on how to do so; this chapter provided an introduction on how to manage deployment environments, while previous chapters have focused on the development of code.
## Links
- [Directory](preface.md)
- Previous section: [Backup and recovery](12.4.md)
-- Next chapter: [Build a web framework](13.0.md)
+- Next chapter: [Building a web framework](13.0.md)