
Question content (please give the correct answer)
Passage: Acknowledging that so-called cloud computing will blur the distinctions between computers and networks, about two dozen big information technology companies plan to announce a new standards-setting group for computer networking. The group, to be called the Open Networking Foundation, hopes to help standardize a set of technologies pioneered at Stanford and the University of California, Berkeley, and meant to make small and large networks programmable in much the same way that individual computers are.
  The changes, if widely adopted, would have implications for global telecommunications networks and large corporate data centers, but also for small household networks. The benefits, proponents say, would be more flexible and secure networks that are less likely to suffer from congestion. Someday, they say, networks might even be less expensive to build and operate. The new approach could allow for setting up on-demand "express lanes" for voice and data traffic that is time-sensitive. Or it might let big telecommunications companies, like Verizon or AT&T, use software to combine several fiber optic backbones temporarily for particularly heavy information loads and then have them automatically separate when a data rush hour is over. For households, the new capabilities might let Internet service providers offer remote services like home security or energy control.
  The foundation's organizers also say the new technologies will offer ways to improve computer security and could possibly enhance individual privacy within the e-commerce and social networking markets. Those markets are the fastest-growing uses for computing and network resources. While the new capabilities could be crucial to network engineers, for business users and consumers the changes might be no more noticeable than advances in plumbing, heating and air-conditioning. Everything might work better, but most users would probably not know- or care- why or how.
  The members of the Open Networking Foundation will include Broadcom, Brocade, Ciena, Cisco, Citrix, Dell, Deutsche Telekom, Ericsson, Facebook, Force10, Google, Hewlett-Packard, I.B.M., Juniper, Marvell, Microsoft, NEC, Netgear, NTT, Riverbed Technology, Verizon, VMWare and Yahoo. "This answers a question that the entire industry has had, and that is how do you provide owners and operators of large networks with the flexibility of control that they want in a standardized fashion." said Nick McKeown, a professor of electrical engineering and computer science at Stanford, where his and colleagues' work forms part of the technical underpinnings, called OpenFlow.
   The effort is a departure from the traditional way the Internet works. As designed by military and academic experts in the 1960s, the Internet has been based on interconnected computers that send and receive packets of data, paying little heed to the content and making few distinctions among the various types of senders and receivers of information. The intelligence in the original Internet was meant to reside largely at the end points of the network-the computers-while the specialized routing computers were relatively dumb post offices of various size, mainly confined to reading addresses and transferring packets of data to adjacent systems. But these days, when cloud computing means a lot of the information is stored and processed on computers out on the network, there is growing need for more intelligent control systems to orchestrate the behavior of thousands of routing machines. It will make it possible, for example, for managers of large networks to program their network to prioritize certain types of data, perhaps to ensure quality of service or to add security to certain portions of a network. The designers argue that because OpenFlow should open up hardware and software systems that control the flow of Internet data packets, systems that have been closed and proprietary, it will cause a new round of innovation focused principally upon the vast computing systems known as cloud computers.
  "Orchestrate"in Para.5 probably means______

A.harmonize
B.comply
C.integrate
D.conform

Reference answer: A

Reference explanation
Explanation: The question asks what "orchestrate" in Paragraph 5 probably means. The topic sentence shows that "orchestrate" here means to coordinate, so the answer is A (harmonize).
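The last paragraph of the passage describes the core idea of OpenFlow-style networking: a central control program installs rules so that managers can, for example, give time-sensitive voice traffic an on-demand "express lane" while the routing devices themselves stay simple. Purely as a rough sketch of that idea, the short Python example below models such a controller; the class and method names are invented for illustration and are not part of the real OpenFlow protocol or of any particular SDN framework.

# Hypothetical sketch of a programmable, software-defined network controller.
# Nothing here is real OpenFlow API; it only mirrors the idea in the passage:
# a central control program holds the intelligence and installs rules,
# while the forwarding devices simply follow them.
from dataclasses import dataclass, field

@dataclass(order=True)
class FlowRule:
    priority: int                              # higher value is matched first ("express lane")
    traffic_type: str = field(compare=False)   # e.g. "voice", "bulk-data"
    action: str = field(compare=False)         # e.g. "fast-path", "best-effort"

class ToyController:
    """A central control program that decides how traffic classes are handled."""
    def __init__(self):
        self.rules: list[FlowRule] = []

    def install_rule(self, rule: FlowRule) -> None:
        self.rules.append(rule)
        self.rules.sort(reverse=True)          # keep highest-priority rules first

    def classify(self, traffic_type: str) -> str:
        for rule in self.rules:
            if rule.traffic_type == traffic_type:
                return rule.action
        return "best-effort"                   # default path when no rule matches

if __name__ == "__main__":
    controller = ToyController()
    # Time-sensitive voice traffic gets an on-demand express lane...
    controller.install_rule(FlowRule(100, "voice", "fast-path"))
    # ...while bulk transfers stay on the ordinary path.
    controller.install_rule(FlowRule(10, "bulk-data", "best-effort"))
    print(controller.classify("voice"))        # fast-path
    print(controller.classify("bulk-data"))    # best-effort

The point the passage makes is exactly this shift: the intelligence moves from the end points of the network into a control program of this kind, while the switches and routers remain relatively simple.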
More questions related to this passage:
Question: Today, several advances in computer network technology are helping companies to extend the use of computers to the procurement, production and distribution processes.

Question: Cloud computing is a type of Internet-based computing that provides shared computer processing resources and data to computers and other devices on demand. Advocates claim that cloud computing allows companies to avoid upfront infrastructure costs. Cloud computing now has a few service forms, but they do not include ( ). A.IaaS B.PaaS C.SaaS D.DaaS

Question: ( ) represents the information assets characterized by such a High Volume, Velocity and Variety to require specific Technology and Analytical Methods for its transformation into Value. A.Internet plus B.Industry 4.0 C.Big data D.Cloud computing

Question: ( ) refers to the application of the Internet and other information technology in conventional industries. It is an incomplete equation where various Internets (mobile Internet, cloud computing, big data or Internet of Things) can be added to other fields, fostering new industries and business development. A.Internet plus B.Industry 4.0 C.Big data D.Cloud computing

Question: ( ) refers to the application of the Internet and other information technology in conventional industries. It is an incomplete equation where various Internets (mobile Internet, cloud computing, big data or Internet of Things) can be added to other fields, fostering new industries and business development. A.Internet plus B.Industry 4.0 C.Big data D.Cloud computing

Question: ( ) refers to the application of the Internet and other information technology in conventional industries. It is an incomplete equation where various Internets (mobile Internet, cloud computing, big data or Internet of Things) can be added to other fields, fostering new industries and business development. A.Internet plus B.Industry 4.0 C.Big data D.Cloud computing

Question: ( ) represents the information assets characterized by such a High Volume, Velocity and Variety to require specific Technology and Analytical Methods for its transformation into Value. A.Internet plus B.Industry 4.0 C.Big data D.Cloud computing

Question (based on the passage above): What is the main purpose of the Open Networking Foundation? A.To make networks less expensive to build and operate B.To enhance the capabilities of network engineers C.To set new standards for computer networking D.To promote cloud computing

Question (based on the passage above): It can be inferred from the passage that ____ A.The Open Networking Foundation will be led by Stanford and the University of California, Berkeley. B.With the setting of new standards, operators of large networks will have more flexibility of control. C.People will have a better understanding of the distinctions between computers and networks thanks to cloud computing. D.Cloud computing will involve more routing computers than the traditional Internet.

Question (based on the passage above): The possible benefits of the standardized technologies ______ A.growing use of computing in the e-commerce market B.software for data rush hours C.improved computer security D.networks that are less expensive to build

Question (based on the passage above): Which of the following is NOT true about OpenFlow? A.It deviates from the traditional Internet. B.It is meant to help with the storing and processing of information on computers. C.It is an initiative of Nick McKeown and his colleagues. D.It will trigger new innovations in the field of cloud computing.

Question: Cloud computing is a type of Internet-based computing that provides shared computer processing resources and data to computers and other devices on demand. Advocates claim that cloud computing allows companies to avoid up-front infrastructure costs. Cloud computing now has a few service forms, but they do not include ( ). A.IaaS B.PaaS C.SaaS D.DaaS

Question: ( ) represents the information assets characterized by such a High Volume, Velocity and Variety to require specific Technology and Analytical Methods for its transformation into Value. A.Internet plus B.Industry 4.0 C.Big data D.Cloud computing

Question: Cloud computing is a type of Internet-based computing that provides shared computer processing resources and data to computers and other devices on demand. Advocates claim that cloud computing allows companies to avoid up-front infrastructure costs. Cloud computing now has a few service forms, but they do not include ( ). A.IaaS B.PaaS C.SaaS D.DaaS

Question: Cloud computing is a type of Internet-based computing that provides shared computer processing resources and data to computers and other devices on demand. Advocates claim that cloud computing allows companies to avoid up-front infrastructure costs. Cloud computing now has a few service forms, but they do not include ( ). A.IaaS B.PaaS C.SaaS D.DaaS

Question: ( ) is a computer technology that uses headsets, sometimes in combination with physical spaces or multi-projected environments, to generate realistic images, sounds and other sensations that simulate a user's physical presence in a virtual or imaginary environment. A.Virtual Reality B.Cloud computing C.Big data D.Internet

Question: Your network consists of a single Active Directory forest. You have an Exchange Server 2003 organization. You need to create a plan to transition the organization to Exchange Server 2010. The plan must meet the following requirements: ensure that e-mail messages can be sent between all users in the organization; ensure that administrators can modify address lists from Exchange Server 2010 servers; ensure that users who are moved to Exchange Server 2010 can access all public folders in the organization. What should you include in the plan? () A、two Send connectors, a sharing policy, and address lists that use OPATH B、two Send connectors, public folder replication, and new address lists C、a two-way routing group connector, a sharing policy, and new address lists D、a two-way routing group connector, public folder replication, and address lists that use OPATH

Question: In a PIM topology using anycast RP, in which two ways can the RPs share group information? () A、Configure MSDP between RPs. B、Redistribute multicast information in your IGP. C、Advertise (S,G) entries using MBGP. D、Configure PIM to advertise group information between RPs.

Question: Consider the following command to add a new disk group called "tdgroupA" with two failover groups: CREATE DISKGROUP tdgroupA NORMAL REDUNDANCY FAILOVERGROUP control01 DISK '/devices/A1', '/devices/A2', '/devices/A3' FAILOVERGROUP control02 DISK '/devices/B1', '/devices/B2', '/devices/B3'. The disk "/devices/A1" is currently a member disk of a disk group by the name "tdgroup1". Which task would be accomplished by the command? () A、This command would result in an error because a disk group can have only one failover group. B、This command would result in an error because the /devices/A1 disk is a member of another disk group, tdgroup1. C、A new disk group called tdgroupA will be added with two failover groups, and the /devices/A1 disk will get reattached to the new disk group without being detached from the existing one. D、A new disk group called tdgroupA will be added with two failover groups, and the /devices/A1 disk will be ignored for the new disk group because it is a member of an existing disk group, tdgroup1. E、A new disk group called tdgroupA will be added with two failover groups, and the /devices/A1 disk gets detached from the existing disk group tdgroup1 and attached to the new disk group tdgroupA.

Question: Your network contains 500 computers. You have a computer that runs Windows XP Professional. The computer is used to perform application testing and has Internet Information Services (IIS) installed. The computer has a group named Developers. You need to ensure that only the members of the Developers group can access the Web site. Which two configuration changes should you perform? () A、Modify the properties of the Developers group. B、Modify the NTFS permissions of the %systemroot%/inetpub/wwwroot folder. C、From the properties of the default Web site, assign an SSL certificate. D、From the properties of the default Web site, modify the Authentication Methods.

Question: Your network contains a server named Server1. Server1 has DirectAccess deployed. A group named Group1 is enabled for DirectAccess. Users report that when they log on to their computers, the computers are not configured to use DirectAccess. You need to ensure that the users' computers are configured to use DirectAccess. What should you do first? () A、On each client computer, add Group1 to the Distributed COM Users group. B、On each client computer, add Group1 to the Network Configuration Operators group. C、From Active Directory Users and Computers, add the users' user accounts to Group1. D、From Active Directory Users and Computers, add the users' computer accounts to Group1.

Question: You are designing a Windows Azure application that will include two web roles. The web roles will communicate with on-premise development computers and on-premise databases. Web Role 1 must connect to development computers and databases. Web Role 2 must connect only to databases. What should you recommend? () A、Create one endpoint group that contains the development computers and one endpoint group that contains the databases. Connect Web Role 1 to both endpoint groups. Connect Web Role 2 to only the database endpoint group. B、Create one endpoint group that contains the development computers and databases. Connect Web Role 1 and Web Role 2 to the endpoint group. C、Create one endpoint group that contains the development computers and one endpoint group that contains the databases. Connect the endpoint groups. Connect Web Role 1 and Web Role 2 to the development computer group. D、Create one endpoint group that contains the development computers and databases, and connect it to Web Role 1. Create one endpoint group that contains only the databases, and connect it to Web Role 2.

Question: You are the network administrator for your company. The network consists of a single Active Directory domain. All servers run Windows Server 2003. All client computers run either Windows XP Professional or Windows 2000 Professional. All client computer accounts are located in an organizational unit (OU) named Workstation. A written company policy states that the Windows 2000 Professional computers must not use offline folders. You create a Group Policy object (GPO) to enforce this requirement. The settings in the GPO exist for both Windows 2000 Professional computers and Windows XP Professional computers. You need to configure the GPO to apply only to Windows 2000 Professional computers. What are two possible ways to achieve this goal? () A、Create a WMI filter that will apply the GPO to computers that are running Windows 2000 Professional. B、Create a WMI filter that will apply the GPO to computers that are not running Windows XP Professional. C、Create two OUs under the Workstation OU. Place the computer accounts for the Windows XP Professional computers in one OU, and place the computer accounts for the Windows 2000 Professional computers in the other OU. Link the GPO to the Workstation OU. D、Create a group that includes the Windows XP Professional computers. Assign the group the Deny - Generate Resultant Set of Policy (Logging) permission. E、Create a group that includes the Windows 2000 Professional computers. Assign the group the Deny - Apply Group Policy permission.

Question: Your network contains a Windows Server Update Services (WSUS) server named Server1. Server1 provides updates to client computers in two sites named Site1 and Site2. A WSUS computer group named Group1 is configured for automatic approval. You need to ensure that new client computers in Site2 are automatically added to Group1. Which two actions should you perform? () A.Create a new automatic approval update rule. B.Modify the Computers Options in the Update Services console. C.Modify the Automatic Approvals options in the Update Services console. D.Configure a Group Policy object (GPO) that enables client-side targeting.

Question: You are the network administrator for your company. The network consists of a single Active Directory domain. All network servers run Windows Server 2003. Three thousand client computers run Windows 2000 Professional, and 1,500 client computers run Windows XP Professional. A new employee named Peter is hired to assist you in installing Windows XP Professional on 150 new client computers. You need to ensure that Peter has only the minimum permissions required to add new computer accounts to the domain and to own the accounts that he creates. Peter must not be able to delete computer accounts. What should you do? () A.Add Peter's user account to the Server Operators group. B.Add Peter's user account to the Account Operators group. C.Use the Delegation of Control Wizard to permit Peter's user account to create new computer objects in the Computers container. D.Create a Group Policy object (GPO) and link it to the domain. Configure the GPO to permit Peter's user account to add client computers to the domain.

Question: Certkiller.com has purchased laptop computers that will be used to connect to a wireless network. You create a laptop organizational unit, create a Group Policy Object (GPO), and configure user profiles by utilizing the names of approved wireless networks. You link the GPO to the laptop organizational unit. The new laptop users complain to you that they cannot connect to a wireless network. What should you do to enforce the group policy wireless settings on the laptop computers? () A、Execute the gpupdate /target:computer command at the command prompt on the laptop computers B、Execute the Add a network command and leave the SSID (service set identifier) blank C、Execute the gpupdate /boot command at the command prompt on the laptop computers D、Connect each laptop computer to a wired network, log off the laptop computer, and then log on again. E、None of the above

考题 单选题Ten years ago, smaller companies did not use large computers because _____.A these companies had not enough money to buy such expensive computersB these computers could not do the work that small computers can do todayC these computers did not come onto the marketD these companies did not need to use this new technology