
update original

pull/709/head
funkill2 committed 4 years ago
commit 7512f68e00
  1. rustbook-en/nostarch/chapter16.md (1204 changed lines)
  2. rustbook-en/src/ch09-02-recoverable-errors-with-result.md (2 changed lines)
  3. rustbook-en/src/ch16-01-threads.md (54 changed lines)
  4. rustbook-en/src/ch16-03-shared-state.md (10 changed lines)
  5. rustbook-en/src/ch20-02-multithreaded.md (4 changed lines)
  6. rustbook-en/src/title-page.md (3 changed lines)

rustbook-en/nostarch/chapter16.md (1204 changed lines)

File diff suppressed because it is too large.

rustbook-en/src/ch09-02-recoverable-errors-with-result.md (2 changed lines)

@@ -114,7 +114,7 @@ However, we want to take different actions for different failure reasons: if
and return the handle to the new file. If `File::open` failed for any other
reason—for example, because we didn’t have permission to open the file—we still
want the code to `panic!` in the same way as it did in Listing 9-4. For this we
-add an inner `match` expression, shown in Listing 8-5.
+add an inner `match` expression, shown in Listing 9-5.
<span class="filename">Filename: src/main.rs</span>
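For context, a minimal sketch of the inner `match` this hunk refers to (a compilable approximation along the lines of the book's Listing 9-5, not the listing itself):

```rust
use std::fs::File;
use std::io::ErrorKind;

fn main() {
    // Try to open the file; on failure, inspect why it failed.
    let _file = match File::open("hello.txt") {
        Ok(file) => file,
        Err(error) => match error.kind() {
            // The file is missing: create it and return the new handle.
            ErrorKind::NotFound => match File::create("hello.txt") {
                Ok(fc) => fc,
                Err(e) => panic!("Problem creating the file: {:?}", e),
            },
            // Any other failure (e.g. missing permissions) still panics,
            // as in Listing 9-4.
            other_error => panic!("Problem opening the file: {:?}", other_error),
        },
    };
}
```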

rustbook-en/src/ch16-01-threads.md (54 changed lines)

@@ -26,40 +26,9 @@ thread.
Programming languages implement threads in a few different ways. Many operating
systems provide an API for creating new threads. This model where a language
calls the operating system APIs to create threads is sometimes called *1:1*,
-meaning one operating system thread per one language thread.
-Many programming languages provide their own special implementation of threads.
-Programming language-provided threads are known as *green* threads, and
-languages that use these green threads will execute them in the context of a
-different number of operating system threads. For this reason, the
-green-threaded model is called the *M:N* model: there are `M` green threads per
-`N` operating system threads, where `M` and `N` are not necessarily the same
-number.
-Each model has its own advantages and trade-offs, and the trade-off most
-important to Rust is runtime support. *Runtime* is a confusing term and can
-have different meanings in different contexts.
-In this context, by *runtime* we mean code that is included by the language in
-every binary. This code can be large or small depending on the language, but
-every non-assembly language will have some amount of runtime code. For that
-reason, colloquially when people say a language has “no runtime,” they often
-mean “small runtime.” Smaller runtimes have fewer features but have the
-advantage of resulting in smaller binaries, which make it easier to combine the
-language with other languages in more contexts. Although many languages are
-okay with increasing the runtime size in exchange for more features, Rust needs
-to have nearly no runtime and cannot compromise on being able to call into C to
-maintain performance.
-The green-threading M:N model requires a larger language runtime to manage
-threads. As such, the Rust standard library only provides an implementation of
-1:1 threading. Because Rust is such a low-level language, there are crates that
-implement M:N threading if you would rather trade overhead for aspects such as
-more control over which threads run when and lower costs of context switching,
-for example.
-Now that we’ve defined threads in Rust, let’s explore how to use the
-thread-related API provided by the standard library.
+meaning one operating system thread per one language thread. The Rust standard
+library only provides an implementation of 1:1 threading; there are crates that
+implement other models of threading that make different tradeoffs.
### Creating a New Thread with `spawn`
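The 1:1 threading the new text describes is what `std::thread::spawn` exposes. As a quick, self-contained refresher (a sketch in the spirit of the chapter's listings, not taken from this diff):

```rust
use std::thread;
use std::time::Duration;

fn main() {
    // thread::spawn creates one OS thread per call (the 1:1 model).
    let handle = thread::spawn(|| {
        for i in 1..5 {
            println!("hi number {} from the spawned thread!", i);
            thread::sleep(Duration::from_millis(1));
        }
    });

    // Block the main thread until the spawned thread finishes.
    handle.join().unwrap();
}
```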
@@ -200,13 +169,12 @@ threads run at the same time.
### Using `move` Closures with Threads
-The `move` closure is often used alongside `thread::spawn` because it allows
-you to use data from one thread in another thread.
-In Chapter 13, we mentioned we can use the `move` keyword before the parameter
-list of a closure to force the closure to take ownership of the values it uses
-in the environment. This technique is especially useful when creating new
-threads in order to transfer ownership of values from one thread to another.
+The `move` keyword is often used with closures passed to `thread::spawn`
+because the closure will then take ownership of the values it uses from the
+environment, thus transferring ownership of those values from one thread to
+another. In the [“Capturing the Environment with Closures”][capture]<!-- ignore
+--> section of Chapter 13, we discussed `move` in the context of closures. Now,
+we’ll concentrate more on the interaction between `move` and `thread::spawn`.
Notice in Listing 16-1 that the closure we pass to `thread::spawn` takes no
arguments: we’re not using any data from the main thread in the spawned
@@ -268,7 +236,7 @@ after automatic regeneration, look at listings/ch16-fearless-concurrency/listing
help: to force the closure to take ownership of `v` (and any other referenced variables), use the `move` keyword
|
6 | let handle = thread::spawn(move || {
-| ^^^^^^^
+| ++++
```
By adding the `move` keyword before the closure, we force the closure to take
@@ -308,3 +276,5 @@ ownership rules.
With a basic understanding of threads and the thread API, let’s look at what we
can *do* with threads.
+[capture]: ch13-01-closures.html#capturing-the-environment-with-closures
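To illustrate the `move`/`thread::spawn` interaction the rewritten paragraph describes, a small self-contained example (an approximation, not one of the chapter's listings verbatim):

```rust
use std::thread;

fn main() {
    let v = vec![1, 2, 3];

    // `move` forces the closure to take ownership of `v`, so the spawned
    // thread does not borrow data owned by the main thread.
    let handle = thread::spawn(move || {
        println!("Here's a vector: {:?}", v);
    });

    handle.join().unwrap();
}
```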

rustbook-en/src/ch16-03-shared-state.md (10 changed lines)

@@ -174,11 +174,9 @@ Fortunately, `Arc<T>` *is* a type like `Rc<T>` that is safe to use in
concurrent situations. The *a* stands for *atomic*, meaning it’s an *atomically
reference counted* type. Atomics are an additional kind of concurrency
primitive that we won’t cover in detail here: see the standard library
-documentation for [`std::sync::atomic`] for more details. At this point, you just
-need to know that atomics work like primitive types but are safe to share
-across threads.
-[`std::sync::atomic`]: ../std/sync/atomic/index.html
+documentation for [`std::sync::atomic`][atomic]<!-- ignore --> for more
+details. At this point, you just need to know that atomics work like primitive
+types but are safe to share across threads.
You might then wonder why all primitive types aren’t atomic and why standard
library types aren’t implemented to use `Arc<T>` by default. The reason is that
@@ -239,3 +237,5 @@ useful information.
We’ll round out this chapter by talking about the `Send` and `Sync` traits and
how we can use them with custom types.
+[atomic]: ../std/sync/atomic/index.html
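As a concrete illustration of the claim that atomics work like primitive types but are safe to share across threads, a brief sketch using `AtomicUsize` (not part of the chapter's listings):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;
use std::thread;

fn main() {
    // An atomically reference-counted handle to an atomic integer.
    let counter = Arc::new(AtomicUsize::new(0));
    let mut handles = vec![];

    for _ in 0..10 {
        let counter = Arc::clone(&counter);
        handles.push(thread::spawn(move || {
            // fetch_add updates the shared value without needing a Mutex.
            counter.fetch_add(1, Ordering::SeqCst);
        }));
    }

    for handle in handles {
        handle.join().unwrap();
    }

    println!("Result: {}", counter.load(Ordering::SeqCst));
}
```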

rustbook-en/src/ch20-02-multithreaded.md (4 changed lines)

@@ -585,8 +585,8 @@ The call to `recv` blocks, so if there is no job yet, the current thread will
wait until a job becomes available. The `Mutex<T>` ensures that only one
`Worker` thread at a time is trying to request a job.
-With the implementation of this trick, our thread pool is in a working state!
-Give it a `cargo run` and make some requests:
+Our thread pool is now in a working state! Give it a `cargo run` and make some
+requests:
<!-- manual-regeneration
cd listings/ch20-web-server/listing-20-20
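For readers skimming only this hunk, a condensed sketch of the pattern the surrounding text describes: several workers share one receiver behind `Arc<Mutex<...>>`, and the blocking `recv` hands each job to exactly one of them. The `Job` alias and worker count here are stand-ins, not the book's exact code:

```rust
use std::sync::{mpsc, Arc, Mutex};
use std::thread;

// Stand-in for the book's `Job` type alias.
type Job = Box<dyn FnOnce() + Send + 'static>;

fn main() {
    let (sender, receiver) = mpsc::channel::<Job>();
    // Arc lets several workers own the receiver; Mutex ensures only one
    // worker at a time can call `recv`.
    let receiver = Arc::new(Mutex::new(receiver));

    let mut workers = vec![];
    for id in 0..2 {
        let receiver = Arc::clone(&receiver);
        workers.push(thread::spawn(move || loop {
            // The lock is released at the end of this statement, before the
            // job runs; `recv` blocks while the channel is empty.
            let message = receiver.lock().unwrap().recv();
            match message {
                Ok(job) => {
                    println!("worker {} got a job; executing.", id);
                    job();
                }
                // The channel closed: no more jobs will arrive.
                Err(_) => break,
            }
        }));
    }

    sender.send(Box::new(|| println!("hello from a job"))).unwrap();
    drop(sender); // closing the channel lets the workers exit their loops

    for worker in workers {
        worker.join().unwrap();
    }
}
```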

rustbook-en/src/title-page.md (3 changed lines)

@@ -11,9 +11,12 @@ The HTML format is available online at
and offline with installations of Rust made with `rustup`; run `rustup docs
--book` to open.
Several community [translations] are also available.
+This text is available in [paperback and ebook format from No Starch
+Press][nsprust].
[install]: ch01-01-installation.html
[editions]: appendix-05-editions.html
+[nsprust]: https://nostarch.com/rust
[translations]: appendix-06-translation.html
