
VI. Enhancing Skills In Russian-English Interpretation

Render orally the following text:

Ускорители заряженных частиц

Ускорители заряженных частиц (УЗЧ) предназначены для получения заряженных частиц (электронов, протонов, атомных ядер, ионов) больших энергий. Ускорение производится с помощью электрического поля, способного изменять энергию частиц, обладающих электрическим зарядом. Магнитное поле может лишь изменить направление движения заряженных частиц, не меняя величины их скорости, поэтому в ускорителях оно применяется для управления движением частиц (формой траектории). Обычно ускоряющее электрическое поле создаётся внешними устройствами (генераторами). Толчком к развитию УЗЧ послужили исследования строения атомного ядра, требовавшие потоков заряженных частиц высокой энергии. Применявшиеся вначале естественные источники заряженных частиц, радиоактивные элементы, были ограничены как по интенсивности, так и по энергии испускаемых частиц. В начальный период (1919–1932) развитие ускорителей шло по пути получения высоких напряжений и их использования для непосредственного ускорения заряженных частиц. В 1931 американским физиком Р. Ван-де-Граафом был построен электростатический генератор, а в 1932 английские физики Дж. Кокрофт и Э. Уолтон из лаборатории Резерфорда разработали каскадный генератор. Эти установки позволили получить потоки ускоренных частиц с энергией порядка миллиона электрон-вольт (Мэв). В 1932 впервые была осуществлена ядерная реакция, возбуждаемая искусственно ускоренными частицами,—расщепление ядра лития протонами.


Разработка ускорителей современного типа началась с 1944, когда советский физик В. И. Векслер и независимо от него американский физик Э. М. Макмиллан открыли механизм автофазировки, действующий в резонансных ускорителях и позволяющий существенно повысить энергию ускоренных частиц. На основе этого принципа были предложены новые типы резонансных ускорителей—синхротрон, фазотрон, синхрофазотрон, микротрон.

В 1957 в СССР (г. Дубна) был запущен самый крупный для того времени синхрофазотрон на энергию 10 Гэв. Через несколько лет в Швейцарии и США вступили в строй синхрофазотроны на 25–30 Гэв, а в 1967 в СССР под Серпуховом — синхрофазотрон на 76 Гэв, который в течение многих лет был крупнейшим в мире.

Одним из наиболее распространенных современных УЗЧ является синхрофазотрон (протонный синхротрон) — циклический резонансный ускоритель протонов с изменяющимся во времени магнитным полем и изменяющейся частотой ускоряющего электрического поля. Из всех современных УЗЧ синхрофазотроны позволяют получать самые высокие энергии частиц.

В синхрофазотроне магнитная система состоит из нескольких магнитных секторов, разделённых прямолинейными промежутками. В промежутках располагаются системы ввода, ускоряющие устройства, системы наблюдения за пучком частиц, вакуумные насосы и др. Вводное устройство служит для перевода частиц из инжектора в вакуумную камеру основного ускорителя. Обычно ввод производится с помощью импульсного отклоняющего устройства, электрическое или магнитное поле которого «заворачивает» впускаемые частицы, направляя их по орбите. Вакуумная камера представляет собой сплошную замкнутую трубу, охватывающую область вокруг равновесной орбиты частиц. С помощью непрерывно действующих откачивающих насосов в камере создаётся достаточно низкое (~10⁻⁶ мм рт. ст.) давление, чтобы рассеяние ускоряемых частиц не приводило к расширению пучка и потере частиц.

Закруглённые участки камеры расположены в зазорах между полюсами электромагнитов, создающих внутри камеры магнитное поле, необходимое для управления движением частиц по замкнутой орбите (заворачивания частиц по орбите). В одном или нескольких зазорах расположены ускоряющие устройства, создающие переменное электрическое поле. Частота поля изменяется в строгом соответствии с изменением магнитного поля. Это достигается обычно с помощью системы автоматического слежения за частотой по данным о положении частиц: ошибка в частоте приводит к отходу частиц от равновесного положения, чувствительные датчики регистрируют этот отход, их сигнал усиливается и используется для введения необходимых поправок.


Сегодня ускорители заряженных частиц не только являются одними из основных инструментов современной физики, но и применяются в других областях: химии, биофизике, геофизике. Расширяется значение УЗЧ различных диапазонов энергий в металлургии — для выявления дефектов деталей и конструкций (дефектоскопия), в деревообделочной промышленности — для быстрой высококачественной обработки изделий, в пищевой промышленности — для стерилизации продуктов, в медицине — для лучевой терапии, «бескровной хирургии» и в ряде других отраслей.

VII. Solving Translation Problems

A term is a word or a group of words used to designate a particular idea or notion within a particular field – scientific, medical, technical, etc. Such terms as atomic mass, half-life, gravity have direct relevance to the terminological system of the corresponding science, though in nonprofessional spheres these ideas would require verbose descriptions. Some terms can become widely understood and eventually pass into common usage; for example, the terminological character of such words as radio or firewall is no longer evident.

Read the text below, copy out the underlined words and make a list of three groups: (1) terms; (2) former terms that now belong to common literary and neutral vocabulary; (3) words that are not and have never been terms. Translate your list and the entire text into Russian.

Cryogenics

Cryogenics is a branch of physics concerned with the study of very low temperatures (from about -280 Fahrenheit down to absolute zero).

Besides the familiar temperature scales of Fahrenheit and Celsius (Centigrade), cryogenicists also use the Kelvin and Rankine temperature scales in which zero is absolute zero, the lowest possible temperature. Absolute zero is at -273.15 Celsius, or -459.67 Fahrenheit. Here’s one example of temperature comparisons: 68 Fahrenheit is the same as 20 Celsius, 293.15 Kelvin, and 527.67 Rankine. For other comparisons, see the table below.
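The arithmetic behind these scale comparisons can be captured in a few conversion helpers (a Python sketch added for illustration; the function names are ours, not the text's):

```python
# Temperature-scale conversions behind the comparisons quoted above.
# Absolute zero sits at -273.15 C (the Kelvin offset) and at -459.67 F
# (the Rankine offset).

def fahrenheit_to_celsius(f):
    return (f - 32.0) * 5.0 / 9.0

def celsius_to_kelvin(c):
    return c + 273.15

def fahrenheit_to_rankine(f):
    return f + 459.67

# The example from the text: 68 F = 20 C = 293.15 K = 527.67 R.
print(fahrenheit_to_celsius(68.0))
print(celsius_to_kelvin(20.0))
print(fahrenheit_to_rankine(68.0))
```

Kelvin and Rankine are both absolute scales; they differ only in the size of the degree (Kelvin uses Celsius-sized degrees, Rankine Fahrenheit-sized ones).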

Fahrenheit   Celsius    Kelvin    Comments
212          100        373.15    water boils
32           0          273.15    water freezes
-40          -40        233.15    Fahrenheit equals Celsius
-320.42      -195.79    77.36     liquid nitrogen boils
-452.11      -268.95    4.2       liquid helium boils
-459.67      -273.15    0         absolute zero

If a gas is cooled sufficiently, it is liquefied, thereby greatly reducing its volume. This makes storage easier and more economical. Liquefied gases, such as liquid nitrogen, oxygen and helium, have several notable properties, including phase changes (gas to liquid, liquid to gas, and vice versa) and thermal expansion (for example, 1 liter of liquid nitrogen will occupy 645.3 liters as a gas once it has all vaporized). Nitrogen gas, when cooled, condenses at -195.8 Celsius (77.36 Kelvin) and freezes at -209.86 Celsius (63.17 Kelvin). Or, to reverse the order, solid nitrogen melts to form liquid nitrogen at 63.17 Kelvin, which boils at 77.36 Kelvin. Oxygen liquefies at -184 Celsius and is bluish in color. This gas in its liquid form is strongly magnetic. Liquid helium boils at -268.93 Celsius (4.2 Kelvin). Helium does not freeze at atmospheric pressure; only at pressures above 20 times atmospheric will solid helium form.
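The nitrogen figures quoted above lend themselves to a small sketch (Python; the names and the fixed-pressure simplification are ours, not the text's):

```python
# Phase of nitrogen at atmospheric pressure, using the melting point
# (63.17 K) and boiling point (77.36 K) given in the text. A deliberate
# simplification: real phase behavior also depends on pressure.

N2_MELTING_K = 63.17
N2_BOILING_K = 77.36

def nitrogen_phase(temp_kelvin):
    if temp_kelvin < N2_MELTING_K:
        return "solid"
    if temp_kelvin < N2_BOILING_K:
        return "liquid"
    return "gas"

# Thermal expansion figure from the text: 1 liter of liquid nitrogen
# occupies about 645.3 liters once fully vaporized.
def vaporized_volume_liters(liquid_liters, expansion_ratio=645.3):
    return liquid_liters * expansion_ratio
```

The narrow 14-kelvin window between melting and boiling is why liquid nitrogen is handled so close to its boiling point.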

In the liquefaction process atmospheric air is passed through a dust precipitator and pre-cooled using conventional refrigeration techniques to remove all traces of dirt and water. It is then compressed inside large turbo pumps to about 100 atmospheres. During the compression cycle the air heats up dramatically and has to be cooled constantly, so the compression is actually done in stages, and between each stage there is an intercooler, which cools the air down before it is compressed any further. Once the air has reached 100 atmospheres and has been cooled to room temperature, it is allowed to expand rapidly through a nozzle into an insulated chamber. Just as air heats up during the compression cycle, it cools down during decompression, since the energy for the rapid escape of gas has to come from the molecules themselves. After several cycles, the chamber reaches temperatures low enough that the air entering it starts to liquefy.
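Why compress in stages with intercoolers? The ideal-gas isentropic relation T2 = T1 * r**((gamma - 1) / gamma) (our addition; the text states no formula) makes the point numerically:

```python
# Temperature rise of air under ideal adiabatic (isentropic) compression:
# T2 = T1 * r**((gamma - 1) / gamma), with gamma = 1.4 for air.
# Illustrative numbers; the text only says the air "heats up dramatically".

GAMMA_AIR = 1.4

def isentropic_outlet_temp_k(inlet_temp_k, pressure_ratio):
    return inlet_temp_k * pressure_ratio ** ((GAMMA_AIR - 1.0) / GAMMA_AIR)

# Compressing from 1 to 100 atmospheres in a single stage, starting at
# room temperature (300 K), would heat the air past 1100 K ...
single_stage = isentropic_outlet_temp_k(300.0, 100.0)

# ... whereas four stages of ratio 100**0.25 each, with intercooling back
# to 300 K between stages, keep every stage's outlet below about 420 K.
per_stage = isentropic_outlet_temp_k(300.0, 100.0 ** 0.25)
```

The real compressors are of course not ideal, but the staged layout with intercooling follows directly from this temperature penalty.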

Liquid gases are removed from the chamber by fractional distillation and are stored inside well-insulated Dewar flasks. A Dewar flask is a double-walled vessel with a high vacuum between the walls, which prevents heat from being transferred by convection currents. Both walls are silver-coated so as to prevent heat from being transmitted by radiation. Dewar flasks are named after their inventor, British physicist Sir James Dewar, the man who first liquefied hydrogen. Dewars are generally about six feet tall and three feet in diameter and are familiar to most of us under the brand name “Thermos”.

The field of cryogenics advanced when, during World War II, scientists found that metals frozen to low temperatures showed more resistance to wear. Building on this theory of cryogenic hardening, and with a background in the heat-treating industry, Ed Busch founded a company in Detroit called CryoTech in 1966 and experimented with the possibility of increasing the life of metal tools to anywhere between 200% and 400% of the original life expectancy using cryogenic tempering instead of heat treating. The theory was based on how heat treating metal works (the metal is cooled from a high temperature down to room temperature, causing certain strength increases in the molecular structure to occur) and supposed that continuing the descent would allow for further strength increases.

Using liquid nitrogen, CryoTech formulated the first early version of the cryogenic processor, designed to reach ultra-low temperatures (usually around -300°F / -184°C) at a slow rate in order to prevent thermal shock to the components being treated. At present, liquefied gases are used in many cryogenic applications. Liquid oxygen, “lox” for short, is used as an oxidizer for rocket fuel formulations (such as those in NASA’s workhorse space shuttle’s main engine). Being 600 times as dense as the oxygen in air, it allows the violent combustion of the large amounts of fuel needed to power a rocket into orbit. Liquid nitrogen and helium are used as coolants.

VIII. Mastering English Grammar

Translate the sentences into Russian paying special attention to the equivalent-lacking grammatical structures:

1. While Lippmann improved photography from black and white to color, Gabor’s holography extended photography from flat pictures to a three-dimensional image space.

2. Interestingly, the physics behind both inventions can be understood on the same principle, namely using the wave nature of light, which involves encoding the image field by interference, recording the structure in a photographic plate, and then reading out the image field again by sending light and getting it modulated in this structure.

3. Compared to water or sound, the wave nature of light is far more difficult to observe due to the small wavelengths (e.g. 0.4–0.7 µm, i.e. 0.0004–0.0007 mm, for visible light) and, worse, the frequencies of the wave vibrations are 750 to 400 THz (1 terahertz is a million million periods per second).

4. This is what in acoustics is taught as a “node” and a “bulge” of the sound vibration, respectively.

5. For a stable pattern of interference fringes, the waves have to be of the same wavelength – the light is monochromatic – and they have to have the same phase relation, i.e. to be of the same origin – the light has coherence.

6. When, after development, white light is shone on the plate in reflection, it will be scattered at these silver grains in all directions.

7. If Gabor wants to reconstruct wavefronts in three-dimensional space, he needs a field of view, and we imagine that he instead has to abandon wavelength range.

8. The light is distributed into several diffracted fields, of which one is called the reconstructed field that propagates through the plate as a replica of the object field which previously hit the plate.

9. Their “Autochrome” method prevailed in the 1930s, before being replaced by the present color photography technology, which generates the dye stuff in three film layers during development.

10. However, Lippmann photography is still held in high regard in science and teaching; there is no other way to image spectra correctly.

IX. Fostering Critical Thinking Skills

Read the text. Find additional material to expand the topic and write a commented essay in Russian on Holography or Color Photography:

Holography And Color Photography

Among the Nobel Prizes in Physics, two scientists have been honored for their remarkable methods to record and present images: Gabriel Lippmann, awarded in 1908 “for his method of reproducing colours photographically based on the phenomenon of interference,” and Dennis Gabor, awarded in 1971 “for his invention and development of the holographic method.” While Lippmann improved photography from black and white to color, Gabor’s holography extended photography from flat pictures to a three-dimensional image space. Procedures that offer each eye of the viewer its own parallax – stereoscopy – are as old as photography itself. But Gabor’s idea of a “hologram” was to store all the information of the whole image space, not just one slightly different second photograph.

Interestingly, the physics behind both inventions can be understood on the same principle, namely using the wave nature of light, which involves encoding the image field by interference, recording the structure in a photographic plate, and then reading out the image field again by sending light and getting it modulated in this structure. Compared to water or sound, the wave nature of light is far more difficult to observe due to the small wavelengths (e.g. 0.4–0.7 µm, i.e. 0.0004–0.0007 mm, for visible light) and, worse, the frequencies of the wave vibrations are 750 to 400 THz (1 terahertz is a million million periods per second). The frequency of light is fundamental: there is no mechanism to read out the motion of light waves. However, a wave motion can be probed by its interaction with a very similar one – an effect called interference – up to a complete standstill in a “standing wave”.

A standing wave arises from the interference of two waves of exactly the same frequency but opposite phase of the vibration amplitude. For light, the stop is a mirror where the impinging wave is reflected. At the metallic mirror, nature avoids absorption of the wave by switching the phase at the same instant as the propagation direction turns over. At the mirror, the resulting field is always zero; at a quarter of a wavelength away from the mirror, the sum of the two fields will periodically change between +2 and -2 times the amplitude. This is what in acoustics is taught as a “node” and a “bulge” of the sound vibration, respectively. In optics, the interference will be observed as dark and bright fringes and can be recorded in photographic film or any other light detector.
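The node at the mirror and the bulge a quarter wavelength away can be checked with a short sketch (Python; the unit incident amplitude is our assumption):

```python
import math

def standing_wave_envelope(x_from_mirror, wavelength):
    """Amplitude envelope |2 sin(kx)| of the field formed by a unit-amplitude
    wave and its phase-flipped reflection from a mirror at x = 0."""
    k = 2.0 * math.pi / wavelength  # wavenumber
    return abs(2.0 * math.sin(k * x_from_mirror))

lam = 0.5e-6  # 0.5 micrometer: green light
node = standing_wave_envelope(0.0, lam)         # zero field at the mirror
bulge = standing_wave_envelope(lam / 4.0, lam)  # maximum, twice the amplitude
```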

For a stable pattern of interference fringes, the waves have to be of the same wavelength – the light is monochromatic – and they have to have the same phase relation, i.e. to be of the same origin – the light has coherence. This condition is achieved when the waves are split from the same source and the delay between the original and the mirrored wave is only a few wavelengths. Standing waves in a thin oil film on wet asphalt and in the emulsion used in Lippmann’s photography fulfil this condition. However, for three-dimensional holography, Gabor had to generate the fringes by letting the object field interfere with an external reference field. A light source of an adequate degree of monochromaticity and coherence first became available with the laser.


The primer on wave optics and interference showed that light of different wavelengths will generate standing waves at corresponding period lengths. Lippmann started out with a pattern of standing waves, where a wavefield meets itself again after being reflected in a mirror. He projected an optical image as usual onto a photographic plate, but through the glass, with the almost transparent emulsion of extremely fine grains on the back side. Then he added the interference effect by placing a mercury mirror in contact with the emulsion. The image went through the emulsion, hit the mirror, and then returned the light back into the emulsion. The image projected onto the plate did not plainly expose the emulsion according to the local distribution of irradiance. Rather, the exposure was encoded when the wave field returned within the emulsion and created standing waves, whose nodes gave little exposure, whereas the bulges gave maximum effect. Hence, after development, the photographic layer contained some twenty or more lamellae of silver grains, with different periods for different colors in the image.
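Inside the emulsion the standing wave has a period of half the wavelength in the medium, i.e. lambda / (2 n), where n is the refractive index. The sketch below (with illustrative values of our own: n = 1.5 and a 5-micrometer emulsion, neither stated in the text) shows this is consistent with "some twenty or more lamellae":

```python
# Lamella spacing in a Lippmann emulsion: half the wavelength of light
# *inside* the medium, i.e. vacuum_wavelength / (2 * n).
# n = 1.5 and the 5-micrometer thickness are illustrative assumptions.

def lamella_period_m(vacuum_wavelength_m, refractive_index=1.5):
    return vacuum_wavelength_m / (2.0 * refractive_index)

def lamella_count(emulsion_thickness_m, vacuum_wavelength_m, refractive_index=1.5):
    return emulsion_thickness_m / lamella_period_m(vacuum_wavelength_m, refractive_index)

# Green light (0.5 um) in a 5 um emulsion: about 30 lamellae.
count = lamella_count(5e-6, 0.5e-6)
```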

When, after development, white light is shone on the plate in reflection, it will be scattered at these silver grains in all directions. In the direction from which the standing wave pattern had been generated, the scattered light fields having the same wavelength as the period of the lamellae will be in phase, interfere constructively, and together create a strong color image. This form of imaging builds on a symmetric process of interference and diffraction: first the image is encoded into an interference pattern, and then it is reconstructed by diffraction at this pattern.

The same two-step principle holds for Gabor’s idea of wavefront reconstruction. If Gabor wants to reconstruct wavefronts in three-dimensional space, he needs a field of view, and we imagine that he instead has to abandon wavelength range. The process has to be done in monochromatic light. The reference for the interference is no longer the reflection of the image field itself (in holography usually called the object field); rather, it has to be provided by a separate reference field. The angle between the reference field and any point of the object field determines the periodicity and orientation of the resulting, much more complicated interference structure, which he called a “hologram.” This also means that in order to obtain decent interference, the coherence length has to be larger than the path difference between any point of the object field and the reference field.

Light comes from a laser at the lower left; via the mirror and the lens at the upper left it illuminates the object, a loudspeaker in the center, which in its turn spreads its light to the photographic plate facing it. Since there is no lens to project an image, the irradiance from the object to the plate is quite uniform. However, a portion of the laser beam has been split off as a reference field at the partly transparent mirror, and it now meets the object field at the photographic plate after about the same travel time.

The two fields then interfere and expose together an intricate standing wave pattern in the emulsion. After development, the reference field alone shines on the plate and becomes modulated in the structure, i.e. the hologram. The light is distributed into several diffracted fields, of which one is called the reconstructed field that propagates through the plate as a replica of the object field which previously hit the plate. In this way, the hologram acts like a window with a memory.

Lippmann photography could not evade the handicap of high-resolution plates requiring exposure times from minutes to hours. However, Lippmann’s demonstration of the feasibility of taking photographs in natural colors stimulated the desire for such technologies. The Lumiere brothers developed, in parallel with the work they did for Lippmann, a process of their own, based on transparent filters in three colors (similar in structure to today’s TV screens). Their “Autochrome” method prevailed in the 1930s, before being replaced by the present color photography technology, which generates the dye stuff in three film layers during development. However, Lippmann photography is still held in high regard in science and teaching; there is no other way to image spectra correctly.

Gabor’s wavefront reconstruction scheme was a new principle in optics and culminated in hologram interferometry, a standard measurement technology for deformation and vibration analysis. Holograms generated by computer can calibrate odd optical or mechanical surfaces to a wavefront just mathematically postulated, or produce optical components for, e.g., CD players, focusing screens or autofocus devices for cameras. Today, there is real commercial volume in holograms laminated on credit cards, ID documents and banknotes, and in brand merchandise verification.


X. Organizing Ideas

Concept maps are tools for organizing and representing knowledge. They harness the power of our vision to understand complex information "at-a-glance." It is easier for the brain to make meaning when incoming information is presented in visual formats. This is why a picture is worth a thousand words. Here are some advantages of concept maps:

They clearly define the central idea by positioning it in the center of the page and indicate the relative importance of other ideas with lines radiating in all directions from the center.

They allow you to see contradictions, paradoxes, and gaps in the material more easily, and in this way provide a foundation for questioning, which in turn encourages discovery and creativity.

They allow you to see all your basic information on one page, which makes recall and review more efficient.

Below is a sample concept map on Color Vision:
