Question content (please give the correct answer)
[Subjective question]

A computer file is a collection of ______ data, used to organize the storage and processing of data by computer.

A. electrical

B. artificial

C. electronic

D. genuine

More questions related to "A computer file is a collection of ______ data, used to organize the storage and processing of data…"

Question 1

A computer virus plagues other computers by ______.

A.misguiding the header information at the beginning of the file

B.preying on other computer files

C.attaching its infected code to the end of a host program

D.all of the above

Question 2

Computer Security

The techniques developed to protect single computers and network-linked computer systems from accidental or intentional harm are called computer security. Such harm includes destruction of computer hardware and software, physical loss of data, and the deliberate invasion of databases by unauthorized individuals.

Data may be protected by such basic methods as locking up terminals and replicating data in other storage facilities. More sophisticated methods include limiting data access by requiring the user to have an encoded card or to supply an identification number or password. Such procedures can apply to the computer data system as a whole or may be pinpointed for particular information banks or programs. Data are frequently ranked in computer files according to degree of confidentiality.

Operating systems and programs may also incorporate built-in safeguards, and data may be encoded in various ways to prevent unauthorized persons from interpreting or even copying the material. The encoding system most widely used in the United States is the Data Encryption Standard (DES), designed by IBM and approved for use by the National Institute of Standards and Technology in 1976. DES involves a number of basic encrypting procedures that are then repeated several times. Very large-scale computer systems, for example, the U.S. military Advanced Research Projects Agency Network (ARPANET), may be broken up into smaller subsystems for security purposes, but smaller systems in government and industry are more prone to system-wide invasions. At the level of personal computers, security possibilities are fairly minimal.

Most invasions of computer systems are for international or corporate spying or sabotage, but computer hackers [1] may take the penetration of protected databanks as a challenge, often with no object in mind other than accomplishing a technological feat. Of growing concern is the deliberate implantation in computer programs of worms or viruses [2] that, if undetected, may progressively destroy databases and other software. Such infected programs have appeared in the electronic bulletin boards available to computer users. Other viruses have been incorporated into computer software sold commercially. No real protection is available against such bugs except the vigilance of manufacturer and user.

Anti-Virus Programs to the Rescue

There is a wide range of virus protection products available to combat the 11,000 known viruses that currently plague personal computers. These products range in technology from virus scanners, to terminate-and-stay-resident monitors, to integrity checkers, to a combination of the three. Each of these techniques has its associated strengths and weaknesses. [3]

The most fundamental question that must be asked when considering and evaluating automated anti-virus tools is "how well does the product protect against the growing virus threat?" When developing a security program, companies must think long term. Not only must you choose a form of protection that can detect and safely eliminate today's varieties, but you must consider tomorrow's gully wash as well. [4] The real challenge lies in securing against the 38,000 new species that are expected to appear within the next two years. The 11,000 known viruses that have been documented to date represent only the tip of the iceberg in terms of what tomorrow will bring.

Virus Protection Methods

Today there exist three broad-based categories of anti-virus techniques: scanners, memory-resident monitors (TSRs), and integrity checkers.

Virus Scanners

Virus scanners are programs designed to examine a computer's boot block, system memory, partition table, and executable files, [5] looking for specific code patterns that are typical of known virus strains. Generally, a virus scanner is able to identify a virus by name and indicate where on the hard drive or floppy drive the infection has occurred. Virus scanners are also able to detect a known virus before it is executed. Virus scanners do a good job of detecting known viruses. They are generally able to find a virus signature if it is present and will identify the infected file and the virus. Some are faster than others, which is an advantage when checking a hard disk with thousands of files. But virus scanners have several major weaknesses.

First and foremost, scanners are completely ineffective against any virus whose code pattern is not recognized. In other words, scanners cannot identify a virus if they don't have a signature for it. Also, many of today's viruses are designed specifically to thwart scanners. These so-called stealth viruses know the correct file size and date for a program (i.e., what they were before the virus infected them). They will intercept operations that ask for that information and return the pre-infection values, not the actual ones, during a disk read. Some viruses can mutate slightly so that the original signature will be rendered ineffective against the new strain, which can even result in file damage if recovery is based on virus-signature assumptions. A new wave in virus authorship is the creation of self-mutating viruses. These viruses infect a file in a different way each time, so they cannot be identified by a simple pattern search, rendering virus scanners ineffective.

Secondly, virus scanners are quickly rendered obsolete and require frequent, costly and time-consuming updates—which may be available only after serious damage has been done. Constantly updating virus scanners, even if the updates are provided free of charge, can be a huge burden. In a corporate environment, where thousands of personal computers must be protected, simply distributing scanner updates in a timely and efficient manner and making sure they are installed is an enormous task.

Integrity Checkers

This is a relatively new approach, compared to scanners and monitors. Integrity checkers incorporate the principle of modification detection. This technique safeguards against both known and unknown viruses by making use of complex file signatures and the known state of the computer environment rather than looking for specific virus signatures.

Each file has a unique signature (which is like a fingerprint, a unique identifier for that particular file) in the form of a CRC or a checksum. Changes in any character within the file will probably change the file's checksum. For a virus to spread, it must get into system memory and change some file or executable code.

An integrity checker will fingerprint and register all program files and various system parameters, such as the boot block, partition table, and system memory, storing this information in an on-line database. By recalculating the file's checksum and comparing it to the original, integrity checkers can detect file changes that are indicative of a virus infection.

Industry experts agree that integrity checking is currently the only way to contend with tomorrow's growing virus threat. Since this methodology does not rely on virus signatures, it offers protection against all potential viruses, today's and tomorrow's.

However, stealth viruses have historically been able to bypass integrity checkers. The only way users can be certain that their computer is 100 percent clean is to boot the system from a clean, DOS-based disk and check the integrity of the information stored on this disk against the current state of the hard drive. This is called the "Golden Rule" of virus protection, yet most integrity checkers fail to follow it.

System Administrator

System Administrator, in computer science, is the person responsible for administering use of a multiuser computer system, communications system, or both. A system administrator performs such duties as assigning user accounts and passwords, establishing security access levels, and allocating storage space, as well as being responsible for other tasks such as watching for unauthorized access and preventing virus or Trojan Horse [6] programs from entering the system. A related term, sysop (system operator), generally applies to a person in charge of a bulletin board system, although the distinction is only that a system administrator is associated with large systems owned by businesses and corporations, whereas a sysop usually administers a smaller, often home-based, system.

Hacker

Hacker, in computer science, was originally a computerphile, a person totally engrossed in computer programming and computer technology. In the 1980s, with the advent of personal computers and dial-up [7] computer networks, "hacker" acquired a pejorative connotation, often referring to someone who secretively invades others' computers, inspecting or tampering with the programs or data stored on them. (More accurately, though, such a person would be called a cracker.) Hacker also means someone who, beyond mere programming, likes to take apart operating systems and programs to see what makes them tick.

Notes

[1] computer hackers: people who break into others' computers without authorization to browse or tamper with the programs or the data stored on them.

[2] Of growing concern is the deliberate implantation in computer programs of worms or viruses: that is, what is of growing concern is the deliberate planting of worm programs or viruses into computer programs.

[3] These products range in technology from virus scanners to terminate and stay resident monitors, to integrity checkers to a combination of the three. Each of these techniques has its associated strengths and weaknesses: that is, these anti-virus products range, in technology, from virus scanners to memory-resident monitors to integrity checkers to combinations of the three, and each technique has its own strengths and weaknesses.

[4] gully wash: "gully" means a channel or gutter; the literal sense is "gully washer," but here it can be rendered as "the problems to come" or "one must plan for the long term."

[5] to examine a computer's boot block, system memory, partition table, and executable files: that is, to check the computer's boot block, system memory, partition table, and executable files.

[6] Trojan Horse: a deceptive program. In computer security, a program that appears to perform (or actually performs) a useful function but contains an additional, hidden function that may exploit the legitimate privileges of the calling process to compromise system security.

[7] dial up: dial-up access, a method of reaching a computer in which the computer connects to the telephone line through a modem and dials in to the network.

Choose the best answer for each of the following:

Question 3

Choose the correct answer for each ______ in the following English passage from the choices given below.

In data processing, using an office metaphor, a file is a related collection of (1) records. For example, you might put the records you have on each of your customers in a (2) file. In turn, each record would (3) consist of fields for individual data items, such as customer name, customer number, customer address, and so forth. By providing the same (4) information in the same fields in each record (so that all records are consistent), your file will be easily (5) accessible for analysis and manipulation by a computer (6) program. This use of the term has become somewhat less important with the advent of the (7) database and its emphasis on the table as a way of collecting record and field data. In mainframe systems, the term data set is generally (8) synonymous with file but implies a specific form of organization recognized by a particular access method. Depending on the (9) operating system, files (and data sets) are contained within a catalog, (10) directory, or folder.
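The customer-file example in the passage can be made concrete in a few lines of Python; the field names and values below are invented for illustration.

```python
# A file as a related collection of records, each record made of fields
# (customer name, customer number, customer address, and so forth).
customer_file = [
    {"name": "Alice Ltd", "number": 1001, "address": "12 Elm St"},
    {"name": "Bob & Co", "number": 1002, "address": "34 Oak Ave"},
]

# Because every record carries the same fields, the whole file is easy
# for a program to analyze and manipulate:
names = [record["name"] for record in customer_file]
```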

Choices:

1.information 2.directory 3.database 4.consist 5.program

6.synonymous 7.operating 8.records 9.accessible 10.file

Question 4

Choose the correct answer for each ______ in the following English sentences from the answers provided below.

An antivirus program (1) a virus by searching for code recognized as that of one of the thousands of viruses known to afflict computer systems. An antivirus program can also be used to create a checksum for (2) files on your disk, save the checksums in a special file, and then use the statistic to (3) whether the files are modified by some new virus. Terminate and stay resident (TSR) programs can check for unusual (4) to access vital disk areas and system files, and check files you copy into memory to be sure they are not (5).

Question 5

Choose the correct answer for each ______ in the following English passage from the choices given below.

File Transfer Protocol (FTP), a standard Internet protocol, is the simplest way to (1) exchange files between computers on the Internet. Like the Hypertext Transfer Protocol (HTTP), which transfers displayable Web (2) pages and related files, and the Simple Mail Transfer Protocol (SMTP), which transfers e-mail, FTP is an (3) application protocol that uses the Internet's TCP/IP protocols. FTP is commonly used to transfer Web page files from their creator to the computer that acts as their (4) server for everyone on the Internet. It's also commonly used to (5) download programs and other files to your computer from other servers.

As a user, you can use FTP with a simple (6) command line interface (for example, from the Windows MS-DOS Prompt window) or with a commercial program that offers a graphical user (7) interface. Your Web browser can also make FTP requests to download programs you select from a Web page. Using FTP, you can also (8) update (delete, rename, move, and copy) files at a server. You need to log on to an FTP server. However, publicly available files are (9) easily accessed using anonymous FTP.

Basic FTP support is usually provided as part of a suite of programs that come with TCP/IP. However, any FTP (10) client program with a graphical user interface usually must be downloaded from the company that makes it.
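The anonymous-FTP usage described above can be sketched with Python's standard ftplib module; the host and file names in the usage note are placeholders, not a real server.

```python
from ftplib import FTP

def fetch(host, remote_name, local_name):
    """Download one file from an anonymous FTP server."""
    with FTP(host) as ftp:
        ftp.login()  # no arguments: anonymous FTP, no account required
        with open(local_name, "wb") as f:
            # RETR is the FTP command that retrieves a file's contents
            ftp.retrbinary(f"RETR {remote_name}", f.write)
```

For example, fetch("ftp.example.com", "readme.txt", "readme.txt") would log on anonymously and download the named file, assuming such a server and file existed.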

Choices:

1.download 2.exchange 3.update 4.interface 5.server

6.client 7.command 8.pages 9.easily 10.application

Question 6

Computer Viruses

Introduction

A computer virus is a piece of software programmed to perform one major task: to replicate. Viruses accomplish their reproductive task by preying on other computer files, requiring a host program [1] as a means of survival. Viruses gain control over their host in various ways, for example by attaching their infected code to the end of a host program and misguiding the header information at the beginning of the file so that it points toward itself rather than the legitimate program. Therefore, when an infected host program is run, the virus gets executed before the host. The host program can be almost anything: an application, part of the operating system, part of the system boot code, or a device driver. The virus continues to spread, moving from file to file in this infectious manner.

In addition to their propagation mission, many viruses contain code whose purpose is to cause damage. In some viruses, this code is activated by a trigger mechanism. [2] A trigger condition may be linked to the number of times the host file is run, or it could be a response to a particular date, time or random number. In other cases, the damage could occur continuously or on a random basis. Of the 11,000 known viruses present today, more than 2,000 have been diagnosed as being data destructive.

Types of Viruses

Several types of viruses exist and are classified according to their propagation patterns.

1. Executable File Infectors

These viruses spread infection by attaching to an executable file, misdirecting the header information, and executing before the host file. It is very common for these viruses to load themselves into memory once their infected host file is launched. From there, they monitor access calls, infecting programs as they are executed.

2. Boot Sector Infectors

This type of virus overwrites the original boot sector, replacing this portion of code with itself, so it is the first to load and gain control upon system boot, even before DOS. In order for boot block viruses to replicate, it is usually necessary to boot the computer from an infected floppy disk. Upon system boot, the virus will jump from the infected floppy disk to the hard disk's partition table.

3. Partition Table Infectors

These viruses attack the hard disk partition table by moving it to a different sector and replacing the original partition table with their own infectious code. These viruses will then spread from the partition table to the boot sector of floppy disks as floppies are accessed.

4. Memory Resident Infectors

Many viruses load themselves into memory while altering vital system services. For example, some viruses modify the operating system's Execute Program service in such a way that any executed program is immediately infected. Other viruses modify the operating system in order to camouflage their existence. These viruses are called Stealth Viruses.

Why Are Viruses Written?

Bulgaria is often referred to as the "Virus Factory" because the country accounts for the highest percentage of new virus creation. Several cultural factors contribute to this state. Primarily, the country offers no software copyright protection, so legitimate software programmers are not rewarded financially for their work. And there are no laws in place to prohibit the authorship of new viruses. In fact, virus source code is often posted on international bulletin boards for anyone to access. Certainly, this is not the case in the United States, so why do we maintain the second highest level of virus authorship? Today's viruses are being written to attack a specific person, company or program. There are countless stories of disgruntled employees who seek vengeance by writing viruses to attack their former employer's computer system.

How Are Viruses Transmitted?

Because a virus is nothing more than a piece of software, it can be acquired in the same way as legitimate programs. Viruses have reportedly been transmitted through shrink-wrapped retail software. [3] Unsuspecting sales representatives often act as carriers by demonstrating infected programs. Newly purchased computers, which had their hard disks formatted by service technicians, have been returned with viruses. These pests travel over phone lines through programs sent by modem. Bulletin boards do occasionally transmit viruses. The most common means of contracting a virus, however, is through the use of floppy disks. Piracy of software, in particular, expedites viral spread, as do floppy disks traveling from one computer to another.

We Are All at Risk

All personal computer users are at risk for viral infection. Several events, trends and technological inroads have combined in the past few years to increase our vulnerability to infection. The proliferation of local area networks, the downloading of information from mainframes to desktop computers, our increased reliance on personal computers to store mission critical data, the arrival of electronic bulletin boards, the globalization of communications, the gained popularity of shareware, the growing use of remote communications, the increased sophistication of end users, the portability of data, the casual spread of software via piracy, and the staggering rate of new virus creation all contribute to increase our risk of virus infection.

A Special Threat to Networks

Viruses present a special threat to networks because of the inherent connectivity they provide and because of the potential for widespread data loss. Once a virus infects a single networked computer, the average time required for it to infect another workstation is anywhere from 10 to 20 minutes. With a propagation time of this magnitude, a virus can paralyze an entire network in several hours.

Virus Infection Symptoms

The most successful virus has no symptoms at all. Your computer may be infected, and you will notice no change in the normal behavior of your computer. The only way to be aware of such viruses is to use automated virus detection tools. Some less sophisticated viruses may exhibit "visible" symptoms such as:

1) Changes in program length

2) Changes in the date or time stamp

3) Longer program load times

4) Slower system operation

5) Unexplained disk activities

6) Unexplained reduction in memory or disk space

7) Bad sectors on your floppy

8) Disappearing programs

9) Unusual error messages

10) Unusual screen activity

11) Access lights turn on for non-referenced drive

12) Failed program execution

It is important to remember that some viruses may not exhibit any visible symptoms at all. Don't count on your intuition as your only tool for detecting viruses.

Anti-Virus Tools

In dealing with today's sophisticated viruses, intuition and strict employee policies are not enough. The more carefully engineered virus programs exhibit no visible symptoms at all until it is too late. Your computer may be infected with a virus without any noticeable alteration in functionality. Therefore, relying solely on visible side effects, such as slower system operation, longer program load time or unusual screen activity as a means of early detection, may not prove as reliable as it once did. You can no longer afford to count on your intuition as your only tool for detecting viruses. While information systems managers should establish employee guidelines and policies to lessen the potential for infection, strict rules alone will not ensure complete protection. What about the shrink-wrapped software program purchased by your company that was later found to be infected by a virus? Or what about the hard drive that was sent out for repair by a service technician, only to [4] have it returned with a virus? The only way to prevent viruses from mysteriously entering your company is to reinforce the security programs already in place with automated virus detection tools.

Defending against Viruses

Following are some tips in helping to combat the growing threat of viral infection.

1) Use an automated virus detection tool, such as Fifth Generation Systems Untouchable virus protection software.

2) Regularly perform a backup of your data with a backup program, such as Fifth Generation Systems Fastback Plus. [5]

3) Prevent unauthorized access to your computer by using a security access program, such as Fifth Generation Systems Disklock. [6]

4) Use write-protected tabs on all program disks before installing any new software. If the software does not allow this, install it first, then apply the write-protected tabs.

5) Do not install new software unless you know it has come from a reliable source. For instance, service technicians and sales representatives are common carriers of viruses. Scan all demonstration or repair software before use.

6) Scan every floppy disk before use and check all files downloaded from a bulletin board or acquired from a modem.

7) Educate employees. As the adage goes, an ounce of prevention is worth a pound of cure.

8) Do not boot from any floppy disk [7], other than a clean, DOS-based disk.

9) Avoid sharing software and machines.

10) Store executable and other vital system parameters on a bootable DOS based disk and regularly compare this information to the current state of your hard drive.

Notes

[1] requiring a host program: "host" here means the program that harbors the virus; "a host program" may be rendered "host program."

[2] a trigger mechanism: a mechanism that activates the virus's damage code.

[3] shrink-wrapped retail software: retail software packaged in shrink-wrap plastic film.

[4] only to: an infinitive phrase expressing result, "only to ...", as in "He made a long speech only to show his ignorance of the subject."

[5] Fifth Generation Systems Fastback Plus: the Fastback Plus backup program from Fifth Generation Systems.

[6] Fifth Generation Systems Disklock: the Disklock security program from Fifth Generation Systems.

[7] Do not boot from any floppy disk: "boot" means to start up; the sentence means "do not start the computer directly from a floppy disk."

Choose the best answer for each of the following:

Question 7

Parallel Computer Models

Parallel processing has emerged as a key enabling technology in modern computers, driven by the ever-increasing demand for higher performance, lower costs, and sustained productivity in real-life applications. Concurrent events are taking place in today's high-performance computers due to the common practice of multiprogramming, multiprocessing, or multicomputing.

Parallelism appears in various forms, such as lookahead, pipelining, vectorization, concurrency, simultaneity, data parallelism, partitioning, interleaving, overlapping, multiplicity, replication, time sharing, space sharing, multitasking, multiprogramming, multithreading, and distributed computing at different processing levels.
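One of the forms listed above, data parallelism, can be illustrated with Python's standard library: the same operation is applied to partitioned data across several workers. The worker count and workload here are arbitrary choices for the sketch.

```python
from concurrent.futures import ThreadPoolExecutor

def square(x):
    return x * x

# Data parallelism: one operation, many data items, several workers.
# pool.map partitions the input range across four worker threads.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(square, range(8)))
```

The result is the same as a sequential loop; the point is that the independent per-item work can proceed concurrently.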

In this part, we model physical architectures of parallel computers, vector supercomputers, [1] multiprocessors, multicomputers, and massively parallel processors. Theoretical machine models are also presented, including the parallel random-access machines (PRAMs) [2] and the complexity model of VLSI (very large-scale integration) circuits. Architectural development tracks are identified with case studies in the article. Hardware and software subsystems are introduced to pave the way for detailed studies in the subsequent section.

The State of Computing

Modern computers are equipped with powerful hardware facilities driven by extensive software packages. To assess state-of-the-art [3] computing, we first review historical milestones in the development of computers. Then we take a grand tour of the crucial hardware and software elements built into modern computer systems. We then examine the evolutional relations in milestone architectural development. Basic hardware and software factors are identified in analyzing the performance of computers.

Computer Development Milestones

Computers have gone through two major stages of development: mechanical and electronic. Prior to 1945, computers were made with mechanical or electromechanical parts. The earliest mechanical computer can be traced back to 500 BC in the form of the abacus used in China. The abacus is manually operated to perform decimal arithmetic with carry propagation digit by digit.

Blaise Pascal built a mechanical adder/subtractor in France in 1642. Charles Babbage designed a difference engine in England for polynomial evaluation in 1827. Konrad Zuse built the first binary mechanical computer in Germany in 1941. Howard Aiken [4] proposed the very first electromechanical decimal computer, which was built as the Harvard Mark I [5] by IBM in 1944. Both Zuse's and Aiken's machines were designed for general-purpose computations.

Obviously, the fact that computing and communication were carried out with moving mechanical parts greatly limited the computing speed and reliability of mechanical computers. Modern computers were marked by the introduction of electronic components. The moving parts in mechanical computers were replaced by high-mobility electrons in electronic computers. Information transmission by mechanical gears or levers was replaced by electric signals traveling almost at the speed of light.

Computer Generations

Over the past five decades, electronic computers have gone through five generations of development. Each of the first three generations lasted about 10 years. The fourth generation covered a time span of 15 years. We have just entered the fifth generation with the use of processors and memory devices with more than 1 million transistors on a single silicon chip.

The division of generations is marked primarily by sharp changes in hardware and software technologies. Most features introduced in earlier generations have been passed to later generations. In other words, the latest generation computers have inherited all the nice features and eliminated all the bad ones found in previous generations.

Elements of Modern Computers

Hardware, software, and programming elements of a modern computer system are briefly introduced below in the context of parallel processing.

Computing Problems

It has been long recognized that the concept of computer architecture is no longer restricted to the structure of the bare machine hardware. A modern computer is an integrated system consisting of machine hardware, an instruction set, system software, application programs, and user interfaces. These system elements are depicted in Fig. 1. The use of a computer is driven by real-life problems demanding fast and accurate solutions. Depending on the nature of the problems, the solutions may require different computing resources.

For numerical problems in science and technology, the solutions demand complex mathematical formulations and tedious integer or floating-point computations. For alphanumerical problems in business and government, the solutions demand accurate transactions, large database management, and information retrieval operations.

For artificial intelligence (AI) problems, the solutions demand logic inferences and symbolic manipulations. These computing problems have been labeled numerical computing, transaction processing, and logical reasoning. Some complex problems may demand a combination of these processing modes.

Algorithms and Data Structures

Special algorithms and data structures are needed to specify the computations and communications involved in computing problems. Most numerical algorithms are deterministic, using regularly structured data. Symbolic processing may use heuristics or nondeterministic searches over large knowledge bases.

Problem formulation and the development of parallel algorithms often require interdisciplinary interactions among theoreticians, experimentalists, and computer programmers. There are many books dealing with the design and mapping of algorithms or heuristics onto parallel computers. In this article, we are more concerned with resource mapping problems than with the design and analysis of parallel algorithms.

Hardware Resources

The system architecture of a computer is represented by three nested circles on the right in Fig. 1. A modern computer system demonstrates its power through coordinated efforts by hardware resources, an operating system, and application software. Processors, memory, and peripheral devices form the hardware core of a computer system. We will study instruction-set processors, memory organization, multiprocessors, supercomputers, multicomputers, and massively parallel computers.

Special hardware interfaces are often built into I/O devices, such as terminals, workstations, optical page scanners, magnetic ink character recognizers, modems, file servers, voice data entry, printers, and plotters. These peripherals are connected to mainframe computers directly or through local or wide-area networks.

In addition, software interface programs are needed. These software interfaces include file transfer systems, editors, word processors, device drivers, interrupt handlers, network communication programs, etc. These programs greatly facilitate the portability of user programs on different machine architectures.

Operating System

An effective operating system manages the allocation and deallocation of resources during the execution of user programs. We will study UNIX [6] extensions for multiprocessors and multicomputers later. The Mach/OS kernel and OSF/1 [7] will be specially studied for multithreaded kernel functions, virtual memory management, the file subsystem, and network communication services. Beyond the OS, application software must be developed to benefit the users. Standard benchmark programs are needed for performance evaluation.

Notes

[1] vector super-computers: vector supercomputer architecture; most current vector supercomputers still use multiple-pipeline structures, while some adopt parallel processing structures.

[2] parallel random-access machines (PRAMs): a machine model with an arbitrary number of processors and separate memories for input, output, and working storage.

[3] state-of-the-art: the most advanced technical level; technology currently under development, or technology holding a leading position in current applications.

[4] Howard Aiken: the designer of the Mark I computer.

[5] Harvard Mark I: an electromechanical calculator designed by Howard Aiken of Harvard University in the late 1930s and early 1940s and built by IBM.

[6] UNIX: the UNIX operating system.

[7] Mach/OS kernel and OSF/1: the Mach operating-system kernel. In an operating system, the kernel is the program that implements basic functions such as allocating hardware resources and scheduling processes; it deals directly with the hardware and remains resident in memory. OSF/1: Open Software Foundation/1.

Choose the best answer for each of the following:


Question 8


A recent development is the local area network (LAN).【21】its name implies, it【22】a local area: possibly as small as a single room, typically an area like a university campus or the premises of a particular business. Local area networks were developed to【23】a need specific to microcomputers: the sharing of expensive resources. Microcomputers are cheap,【24】high-capacity disc stores, fast and/or good quality printers, etc. are expensive. The object of the LAN is to allow【25】microcomputers shared access to these expensive resources. Since the microcomputers are【26】, it is a necessary feature of a LAN that the method of connection to the network, and the network hardware【27】, must also be cheap.

A local area network links a number of computers and a number of servers【28】provide communal facilities, e.g. file storage. (A server usually includes a small microprocessor for control purposes.) The computers and servers are known【29】stations. There are two methods of【30】in common use, rings and broadcast networks.

In the ring method (often called a Cambridge Ring), all the stations are linked in a ring,【31】includes one special station, the monitor station.

In broadcast networks, all the stations are【32】to a single linear cable (usually co-ax cable), and any transmission will be received by all stations.

【33】technology is used, local area networks are a development of the greatest importance.【34】as programming is simplified by an approach that thinks in terms of small procedures or programs, each doing a well-defined job, the computer system of tomorrow is likely to be【35】lots of small systems, each doing a specific job, linked by a local area network.

(41)

A.As

B.Like

C.Since

D.Because


Question 9


Programs and Programming


Computer programs, which are also called software, are instructions that cause the hardware (the machines) to do work. Software as a whole can be divided into a number of categories based on the types of work done by programs. The two primary software categories are operating systems (system software), which control the working of the computer, and application software, which addresses the multitude of tasks for which people use computers. System software thus handles such essential, but often invisible, chores as maintaining disk files and managing the screen, whereas[1] application software performs word processing, database management, and the like. Two additional categories that are neither system nor application software, although they contain elements of both, are network software, which enables groups of computers to communicate, and language software, which provides programmers with the tools they need to write programs. In addition to these task-based[2] categories, several types of software are described based on their method of distribution. These include the so-called canned programs or packaged software, developed and sold primarily through retail outlets; freeware and public domain software, which is made available without cost by its developer; shareware, which is similar to freeware but usually carries a small fee for those who like the program; and the infamous vaporware, which is software that either does not reach the market or appears much later than promised.

Operating Systems

Different types of peripheral devices (disk drives, printers, communications networks, and so on) handle and store data differently from the way the computer handles and stores it. Internal operating systems, usually stored in ROM memory,[3] were developed primarily to coordinate and translate data flows from dissimilar sources, such as disk drives or coprocessors (processing chips that perform simultaneous but different operations from the central unit). An operating system is a master control program, permanently stored in memory, that interprets user commands requesting various kinds of services, such as display, print, or copy a data file; list all files in a directory; or execute a particular program.

Application

Application is a computer program designed to help people perform a certain type of work. An application thus differs from an operating system (which runs a computer), a utility (which performs maintenance or general purpose chores), and a language (with which computer programs are created). Depending on the work for which it was designed, an application can manipulate text, numbers, graphics, or a combination of these elements. Some application packages offer considerable computing power by focusing on a single task, such as Wordpad[4]; others, called integrated software, offer somewhat less power but include several applications, such as Winword, Excel and Foxpro.

Programming

A program is a sequence of instructions that tells the hardware of a computer what operations to perform on data. Programs can be built into the hardware itself, or they may exist independently in a form known as software. In some specialized, or "dedicated", computers the operating instructions are embedded in their circuitry; common examples are the microcomputers found in calculators, wristwatches, automobile engines, and microwave ovens. A general purpose computer, on the other hand, contains some built-in programs (in ROM) or instructions (in the processor chip), but it depends on external programs to perform useful tasks. Once a computer has been programmed, it can do only as much or as little as the software controlling it at any given moment enables it to do. Software in widespread use includes a wide range of applications programs: instructions to the computer on how to perform various tasks.

1. Application Program Interface

Application Program Interface is a set of routines that an application program uses to request and carry out lower level services performed by a computer's operating system. An application program carries out two types of tasks: those related to work being performed, such as accepting text or numbers input to a document or spreadsheet, and those related to maintenance chores, such as managing files and displaying information on the screen. These maintenance chores are performed by the computer's operating system, and an application program interface (API) provides the program with a means of communicating with the system, telling it which system level task to perform and when. On computers running a graphical user interface such as that on the Apple Macintosh, an API also helps application programs manage windows, menus, icons, and so on. On local area networks, an API, such as IBM's NetBIOS, provides applications with a uniform means of requesting services from the lower levels of the network.
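The division of labor described above, application-level work versus system-level chores requested through an API, can be sketched in Python, whose `os` module is itself a thin API over operating-system services. This is a minimal illustration, not the NetBIOS or Macintosh API named in the passage; the document text is an invented example.

```python
import os
import tempfile

# Application-level work: produce some document content.
document = "hello, operating system\n"

# System-level chores are not performed by the application itself;
# they are requested from the operating system through API calls.
fd, path = tempfile.mkstemp()       # ask the OS to create a new file
os.write(fd, document.encode())     # ask the OS to store the bytes
os.close(fd)                        # release the file descriptor

size = os.stat(path).st_size        # ask the OS how large the file is
os.remove(path)                     # ask the OS to delete the file
print(size)                         # → 24
```

Every `os.*` call above crosses the application/operating-system boundary that the passage describes: the program says which system-level task to perform and when, and the OS carries it out.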

2. Word Processor

Word Processor is an application program for manipulating text-based documents; the electronic equivalent of paper, pen, typewriter, eraser, and, most likely, dictionary and thesaurus. Word processors run the gamut from simple through complex,[5] but all ease the tasks associated with editing documents (deleting, inserting, rewording, and so on). Depending on the program and the equipment in use, word processors can display documents either in text mode, using highlighting, underlining, or color to represent italics, boldfacing, and other such formatting, or in graphics mode, wherein formatting and, sometimes, a variety of fonts appear on the screen as they will on the printed page. All word processors offer at least limited facilities for document formatting, such as font changes, page layout, paragraph indention, and the like. Some word processors can also check spelling, find synonyms, incorporate graphics created with another program, correctly align mathematical formulas, create and print form letters, perform calculations, display documents in multiple on-screen windows, and enable users to record macros that simplify difficult or repetitive operations.

Notes

[1] whereas: a conjunction expressing contrast, translated as "while". For example: "We are working, whereas they are playing."

[2] task-based: classified according to task.

[3] ROM memory: ROM is short for read-only memory.

[4] Wordpad, Winword, Excel, and Foxpro: names of application programs, used for word processing, spreadsheets, and databases respectively.

[5] Word processors run the gamut from simple through complex: word processors cover the entire range from simple to complex.


Question 10


Computer Languages


A computer must be given instructions in a language that it understands, that is, a particular pattern of binary digital information. On the earliest computers, programming was a difficult, laborious task, because vacuum tube ON/OFF switches had to be set by hand. Teams of programmers often took days to program simple tasks, such as sorting a list of names. Since that time a number of computer languages have been devised, some with particular kinds of functioning in mind and others aimed more at ease of use (the user-friendly approach).

Machine Language

Unfortunately, the computer's own binary based language, or machine language, is difficult for humans to use. The programmer must input every command and all data in binary form, and a basic operation such as comparing the contents of a register to the data in a memory chip location might look like this: 11001010 00010111 11110101 00101011. Machine language programming is such a tedious, time-consuming task that the time saved in running the program rarely justifies the days or weeks needed to write the program.

Assembly Language

One method programmers devised to shorten and simplify the process is called assembly language programming. By assigning a short (usually three letter) mnemonic code to each machine language command, assembly language programs could be written and debugged (cleaned of logic and data errors) in a fraction of the time needed by machine language programmers. In assembly language, each mnemonic command and its symbolic operands equals one machine instruction. An assembler program translates the mnemonic opcodes (operation codes) and symbolic operands into binary language and executes the program. Assembly language is a type of low level computer programming language in which each statement corresponds directly to a single machine instruction. Assembly languages are, thus, specific to a given processor. After writing an assembly language program, the programmer must use an assembler to translate the program into machine code. Assembly language provides precise control of the computer, but assembly language programs written for one type of computer must be rewritten to operate on another type. Assembly language might be used instead of a high level language for any of three major reasons: speed, control, and preference. Programs written in assembly language usually run faster than those generated by a compiler; use of assembly language lets a programmer interact directly with the hardware (processor, memory, display, and input/output ports). Assembly language, however, can be used only with one type of CPU chip or microprocessor. Programmers who expended much time and effort to learn how to program one computer had to learn a new programming style each time they worked on another machine. What was needed was a shorthand method by which one symbolic statement could represent a sequence of many machine language instructions, and a way that would allow the same program to run on several types of machines. These needs led to the development of so-called high level languages.
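The one-to-one mapping between mnemonics and machine instructions described above can be illustrated with a toy assembler in Python. The three-letter opcodes and 8-bit encodings below are invented for illustration; they do not correspond to any real processor's instruction set.

```python
# Toy instruction set: each mnemonic maps to one fixed 8-bit opcode,
# mirroring the one-statement-to-one-instruction rule of assembly.
OPCODES = {"LDA": "00010111", "ADD": "11001010", "STA": "11110101"}

def assemble(source):
    """Translate mnemonic statements into 16-bit binary machine words."""
    machine_code = []
    for line in source:
        mnemonic, operand = line.split()
        # Opcode lookup, followed by an 8-bit binary encoding of the operand.
        machine_code.append(OPCODES[mnemonic] + format(int(operand), "08b"))
    return machine_code

program = ["LDA 5", "ADD 3", "STA 9"]
print(assemble(program))
# → ['0001011100000101', '1100101000000011', '1111010100001001']
```

Each source statement yields exactly one machine word, which is why the passage calls assembly a low level language: the programmer is still writing individual machine instructions, just in a readable notation.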

High Level Languages

High level languages often use English-like words (for example, LIST, PRINT, OPEN, and so on) as commands that might stand for a sequence of tens or hundreds of machine language instructions. The commands are entered from the keyboard or from a program in memory or in a storage device, and they are interpreted by a program that translates them into machine language instructions.

Translator programs are of two kinds: interpreters and compilers. With an interpreter, programs that loop back to reexecute part of their instructions reinterpret the same instructions each time they appear, so interpreted programs run much more slowly than machine language programs. Compilers, by contrast, translate an entire program into machine language prior to execution, so such programs run as rapidly as though they were written directly in machine language.
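The performance difference just described (an interpreter re-translates a statement on every pass through a loop, while a compiler translates it once) can be sketched with a toy language consisting of `ADD n` statements. The function names and the counter are illustrative inventions, not any real translator's design.

```python
translations = 0  # counts how often a statement is translated

def translate(stmt):
    """Parse one 'ADD n' statement into an executable step."""
    global translations
    translations += 1
    _, n = stmt.split()
    return int(n)

def interpret(stmt, loops):
    """Interpreter: re-translate the same statement on every iteration."""
    total = 0
    for _ in range(loops):
        total += translate(stmt)   # translation repeated each pass
    return total

def compile_then_run(stmt, loops):
    """Compiler: translate once, then execute the result repeatedly."""
    step = translate(stmt)         # translation done a single time
    total = 0
    for _ in range(loops):
        total += step
    return total

interpret("ADD 2", 1000)
after_interp = translations        # interpreter: 1000 translations
compile_then_run("ADD 2", 1000)
print(after_interp, translations - after_interp)  # → 1000 1
```

Both routes compute the same result; the interpreter simply pays the translation cost a thousand times where the compiled version pays it once, which is the passage's point about loops.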

American computer scientist Grace Hopper is credited with implementing the first commercially oriented computer language. After programming an experimental computer at Harvard University[1], she worked on the UNIVAC[2] I and II computers and developed a commercially usable high level programming language called FLOW-MATIC to facilitate computer use in scientific applications. IBM[3] then developed a language that would simplify work involving complicated mathematical formulas. Begun in 1954 and completed in 1957, FORTRAN (FORmula TRANslator)[4] was the first comprehensive high level programming language that was widely used. In 1957, the Association for Computing Machinery[5] set out to develop a universal language that would correct some of FORTRAN's perceived faults. A year later, they released ALGOL[6] (ALGOrithmic Language), another scientifically oriented language; widely used in Europe in the 1960s and 1970s, it has since been superseded by newer languages, while FORTRAN continues to be used because of the huge investment in existing programs. COBOL[7] (COmmon Business Oriented Language), a commercial and business programming language, concentrates on data organization and file handling and is widely used today in business.

BASIC[8] (Beginners All-purpose Symbolic Instruction Code) was developed at Dartmouth College in the early 1960s for use by nonprofessional computer users. The language came into almost universal use with the microcomputer explosion of the 1970s and 1980s. Condemned as slow, inefficient, and inelegant by its detractors, BASIC is nevertheless simple to learn and easy to use. Because many early microcomputers were sold with BASIC built into the hardware (in ROM memory), the language rapidly came into widespread use. As a very simple example of a BASIC program, consider the addition of the numbers 1 and 2, and the display of the result. This is written as follows (the numerals 10-40 are line numbers):

10 A=1

20 B=2

30 C=A+B

40 PRINT C

Although hundreds of different computer languages and variants exist, several others deserve mention. PASCAL[9], originally designed as a teaching tool, is now one of the most popular microcomputer languages. LOGO was developed to introduce children to computers. C, a language Bell Laboratories designed in the 1970s, is widely used in developing systems programs, such as language translators. LISP[10] and PROLOG are widely used in artificial intelligence.

COBOL

COBOL, in computer science, acronym for COmmon Business-Oriented Language, is a verbose, English-like programming language developed between 1959 and 1961. Its establishment as a required language by the U.S. Department of Defense, its emphasis on data structures, and its English-like syntax (compared to those of FORTRAN and ALGOL) led to its widespread acceptance and usage, especially in business applications. Programs written in COBOL, which is a compiled language, are split into four divisions: Identification, Environment, Data, and Procedure. The Identification division specifies the name of the program and contains any other documentation the programmer wants to add. The Environment division specifies the computer(s) being used and the files used in the program for input and output. The Data division describes the data used in the program. The Procedure division contains the procedures that dictate the actions of the program.

C & C++

A widely used programming language, C was developed by Dennis Ritchie at Bell Laboratories in 1972; it was so named because its immediate predecessor was the B programming language. Although C is considered by many to be more a machine independent assembly language than a high level language, its close association with the UNIX[11] operating system, its enormous popularity, and its standardization by the American National Standards Institute (ANSI)[12] have made it perhaps the closest thing to a standard programming language in the microcomputer/workstation marketplace. C is a compiled language that contains a small set of built in functions that are machine dependent. The rest of the C functions are machine independent and are contained in libraries that can be accessed from C programs. C programs are composed of one or more functions defined by the programmer; thus, C is a structured programming language. C++, in computer science, is an object oriented version of the C programming language, developed by Bjarne Stroustrup in the early 1980s at Bell Laboratories and adopted by a number of vendors, including Apple Computer, Sun Microsystems, Borland International, and Microsoft Corporation.

Notes

[1] Harvard University: a university in the United States.

[2] UNIVAC (UNIVersal Automatic Computer).

[3] IBM (International Business Machines Corporation).

[4] FORTRAN (FORmula TRANslator): a programming language for translating formulas.

[5] the Association for Computing Machinery: the ACM (USA).

[6] ALGOL (ALGOrithmic Language): an algebra-oriented language.

[7] COBOL (COmmon Business Oriented Language): a general-purpose business-oriented language.

[8] BASIC (Beginners All-purpose Symbolic Instruction Code).

[9] Pascal: a programming language named after the French mathematician Blaise Pascal.

[10] LISP (LISt Processing): a list-processing language.

[11] UNIX: the UNIX operating system, a multi-user, multitasking operating system developed at AT&T Bell Laboratories in 1969.

[12] ANSI (American National Standards Institute).


Question 11


Distributed Systems


Computer systems are undergoing a revolution. From 1945, when the modern computer era began, until about 1985, computers were large and expensive. Even minicomputers normally cost tens of thousands of dollars each. As a result, most organizations had only a handful of computers, and for lack of a way to connect them, they operated independently from one another.

Starting in the mid 1980s, however, two advances in technology began to change that situation. The first was the development of powerful microprocessors. Initially, these were 8 bit machines, but soon 16, 32, and even 64 bit CPUs became common. Many of these had the computing power of a decent-sized mainframe (i.e. large) computer, but for a fraction of the price.

The amount of improvement that has occurred in computer technology in the past half century is truly staggering and totally unprecedented in other industries. From a machine that cost 10 million dollars and executed 1 instruction per second, we have come to machines that cost 1,000 dollars and execute 10 million instructions per second, a price/performance gain of 10^11. If cars had improved at this rate in the same time period, a Rolls-Royce would now cost 10 dollars and get a billion miles per gallon. (Unfortunately, it would probably also have a 200 page manual telling how to open the door.) The second development was the invention of high speed computer networks. The local area networks, or LANs, allow dozens, or even hundreds, of machines within a building to be connected in such a way that small amounts of information can be transferred between machines in a millisecond or so. Larger amounts of data can be moved between machines at rates of 10 to 100 million bits/sec and sometimes more. The wide area networks, or WANs, allow millions of machines all over the earth to be connected at speeds varying from 64Kbps (kilobits per second) to gigabits per second for some advanced experimental networks.

The result of these technologies is that it is now not only feasible, but easy, to put together computing systems composed of large numbers of CPUs connected by a high speed network. They are usually called distributed systems, in contrast to the previous centralized systems (or single processor systems) consisting of a single CPU, its memory, peripherals, and some terminals.

There is only one fly in the ointment[1]: software. Distributed systems need radically different software than centralized systems do. In particular, the necessary operating systems are only beginning to emerge. The first few steps have been taken, but there is still a long way to go. Nevertheless, enough is already known about these distributed operating systems that we can present the basic ideas.

What Is a Distributed System?

Various definitions of distributed systems have been given in literature, none of them satisfactory and none of them in agreement with any of the others. For our purposes it is sufficient to give a loose characterization.

A distributed system is a collection of independent computers that appear to the users of the system as a single computer.

This definition has two aspects. The first one deals with hardware: the machines are autonomous. The second one deals with software: the users think of the system as a single computer. Both are essential.

Rather than going further with definitions, it is probably more helpful to give several examples of distributed systems. As a first example, consider a network of workstations in a university or company department. In addition to each user's personal workstation, there might be a pool of processors in the machine room that are not assigned to specific users but are allocated dynamically as needed. Such a system might have a single file system, with all files accessible from all machines in the same way and using the same path name. Furthermore, when a user typed a command, the system could look for the best place to execute that command, possibly on the user's own workstation, possibly on an idle workstation belonging to someone else, and possibly on one of the unassigned processors in the machine room. If the system as a whole looked and acted like a classical single processor timesharing system, it would qualify as a distributed system.

As a second example, consider a factory full of robots, each containing a powerful computer for handling vision, planning, communication, and other tasks. When a robot on the assembly line notices that a part it is supposed to install is defective, it asks another robot in the parts department to bring it a replacement. If all the robots act like peripheral devices attached to the same central computer and the system can be programmed that way, it too counts as a distributed system.

As a final example, think about a large bank with hundreds of branch offices all over the world. Each office has a master computer to store local accounts and handle local transactions. In addition, each computer has the ability to talk to all other branch computers and with a central computer at headquarters. If transactions can be done without regard to where a customer or account is, and the users do not notice any difference between this system and the old centralized mainframe that it replaced, it too would be considered a distributed system.

Advantages of Distributed Systems over Centralized Systems

The real driving force behind the trend toward decentralization is economics. A quarter of a century ago, computer pundit and gadfly Herb Grosch stated what later came to be known as Grosch's law: the computing power of a CPU is proportional to the square of its price. By paying twice as much, you could get four times the performance. This observation fit the mainframe technology of its time quite well, and led most organizations to buy the largest single machine they could afford.

With microprocessor technology, Grosch's law no longer holds. For a few hundred dollars you can get a CPU chip that can execute more instructions per second than one of the largest 1980s mainframes. If you are willing to pay twice as much, you get the same CPU, but running at a somewhat higher clock speed. As a result, the most cost effective solution is frequently to harness a large number of cheap CPUs together in a system. Thus, the leading reason for the trend toward distributed systems is that these systems potentially have a much better price/performance ratio than a single large centralized system would have. In effect, a distributed system gives more bang for the buck[2].
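Grosch's law, as quoted above, can be put in numbers: if computing power is proportional to the square of price, then paying twice as much buys four times the performance. The short sketch below only restates that arithmetic; the proportionality constant is arbitrary.

```python
def grosch_power(price):
    """Grosch's law: computing power proportional to the square of price
    (proportionality constant taken as 1 for illustration)."""
    return price ** 2

# Paying twice as much yields four times the performance:
print(grosch_power(2) / grosch_power(1))  # → 4.0
```

The passage's point is that commodity microprocessors broke this square law: doubling the price of one CPU now buys only a modest clock-speed bump, so spending the same money on many cheap CPUs, whose aggregate power grows roughly linearly with their number, becomes the more cost-effective choice.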

A slight variation on this theme is the observation that a collection of microprocessors can not only give a better price/performance ratio than a single mainframe, but may yield an absolute performance that no mainframe can achieve at any price. For example, with current technology it is possible to build a system from 10,000 modern CPU chips, each of which runs at 50 MIPS (Millions of Instructions Per Second), for a total performance of 500,000 MIPS. For a single processor (i.e. CPU) to achieve this, it would have to execute an instruction in 0.002 nsec (2 picosec). No existing machine even comes close to this, and both theoretical and engineering considerations make it unlikely that any machine ever will. Theoretically, Einstein's theory of relativity dictates that nothing can travel faster than light, which can cover only 0.6 mm in 2 picosec. Practically, a computer of that speed fully contained in a 0.6 mm cube would generate so much heat that it would melt instantly. Thus, whether the goal is normal performance at low cost or extremely high performance at greater cost, distributed systems have much to offer.
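The arithmetic in this argument can be reproduced step by step: 10,000 chips at 50 MIPS give 500,000 MIPS; a single CPU matching that must retire an instruction every 2 picoseconds; and light covers only about 0.6 mm in that time. The speed of light is the only value supplied from outside the passage.

```python
chips = 10_000
mips_per_chip = 50
total_mips = chips * mips_per_chip
print(total_mips)                      # → 500000

# A single CPU matching this total must execute one instruction in:
seconds_per_instruction = 1 / (total_mips * 1e6)
print(seconds_per_instruction)         # → 2e-12  (2 picoseconds)

# Distance light travels in that time (c ≈ 3e8 m/s):
c = 3e8
distance_mm = c * seconds_per_instruction * 1000
print(round(distance_mm, 1))           # → 0.6
```

So a signal cannot even cross a 0.6 mm chip within one instruction time, which is the relativity bound the passage invokes.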

As an aside, some authors make a distinction between distributed systems, which are designed to allow many users to work together, and parallel systems, whose only goal is to achieve maximum speedup on a single problem, as our 500,000 MIPS machine might. We believe that this distinction is difficult to maintain because the design spectrum is really a continuum. We prefer to use the term "distributed system" in the broadest sense to denote any system in which multiple interconnected CPUs work together.

Another reason for building a distributed system is that some applications are inherently distributed. A supermarket chain might have many stores, each of which gets goods delivered locally (possibly from local farms), makes local sales, and makes local decisions about which vegetables are so old or rotten that they must be thrown out. It therefore makes sense to keep track of inventory at each store on a local computer rather than centrally at corporate headquarters. After all, most queries and updates will be done locally. Nevertheless, from time to time, top management may want to find out how many rutabagas it currently owns. One way to accomplish this goal is to make the complete system look like a single computer to the application programs, but implement it decentrally, with one computer per store as we have described. This would then be a commercial distributed system.

Notes

[1] There is only one fly in the ointment: one flaw spoils it all.

[2] gives more bang for the buck: "buck" is slang for a dollar; the phrase means getting a lot of value for relatively little money.
