Atlantic Computer Case Study Essay


Atlantic Computer developed a product, the “Atlantic Bundle,” to serve the emerging basic server market. The Atlantic Bundle pairs the Tronn server with the Performance Enhancing Server Accelerator (“PESA”) software tool. Atlantic Computer must now decide on a pricing strategy.

Situational Analysis

The external analysis is as follows:

•Customers: The first customer segment identified, the “Web Server” customer, primarily needs to host websites. The second segment, the “File Sharing” customer, primarily needs file servers that help layout designers share graphics, text, and layouts.

Customers in these segments appear to be the ones that will benefit the most from the PESA tool.

•Competition: The primary competitor in the market is Ontario, which claims 50% of revenue market share, with the remainder of the market comprised of many smaller vendors (external threat, Appendix A). Ontario’s business model focuses on driving out non-value-added costs and competing largely on price (value pricing). Its products are sold primarily through the internet.

•Collaborators: The server division relies upon a high-touch direct sales channel, which costs more than online sales.

Sales reps receive 70% salary and 30% commission.

•Context: The largest segment of the server industry is the high-performance server segment. It is expected to demand approximately 200,000 units next year and to grow at approximately 3% annually over the following two years. The basic server segment is newer, with strong forecasted growth of 36% annually (external opportunity, Appendix A). It will comprise approximately 20% of total units sold next year, at 50,000 units. By the third year of the forecast, the basic server market will make up approximately 30% of total units sold.
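The forecast figures above can be checked with a few lines of arithmetic. A minimal sketch, assuming the 3% and 36% growth rates compound annually from next year's unit bases:

```python
# Check the year-three share of the basic server segment, assuming
# the stated growth rates compound annually from next year's bases.
high_perf = 200_000.0  # high-performance units, next year
basic = 50_000.0       # basic server units, next year

for _ in range(2):     # grow through the following two years
    high_perf *= 1.03
    basic *= 1.36

share = basic / (basic + high_perf)
print(f"Basic segment share in year three: {share:.1%}")  # about 30%
```

With these inputs the basic segment lands at roughly 30% of units by year three, consistent with the forecast quoted above.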

The internal analysis is as follows:

•Company situation: Atlantic is a well-established company with over 30 years of experience in the server market. The company is known for providing top-notch, highly reliable products and high-quality, responsive post-sales assistance (internal strength, Appendix A). Atlantic has focused on selling high-end performance servers to large enterprise customers. The “Atlantic Bundle” was developed to help the company enter the basic server segment. The product was designed to serve that segment without creating a substitute for the high-performance servers. However, this concern seems overstated, as customers would not have viewed a basic server as a substitute for a high-performance server in the first place (internal weakness, Appendix A). In the past, Atlantic’s sales force gave away software tools.

•Relative market/competitive position: Ontario’s Zink server performs at approximately the same level as Atlantic’s Tronn. Even without the built-in PESA R&D costs, the Tronn was priced higher than the Zink. Hence, the target market was narrowed to customers that require more than one server. PESA allows the Tronn to perform up to four times faster than its standard speed, so the “Atlantic Bundle” will let companies reduce the number of basic servers they must purchase and cut operating expenses such as electricity charges and software license fees. Mr. Matzer indicated the “Atlantic Bundle” is the sale they want.

•Results: The gains to customers from the PESA software tool were examined and it was found that the Web Server and File Sharing application segments will benefit the most from the tool. The conclusion was based on the benefit to customers of being able to purchase fewer servers and the resulting savings (internal strength, Appendix A).

•Challenges: The primary challenge is whether Atlantic can succeed using its commissioned sales force rather than online sales. A second problem is how to motivate the sales force and provide the training required to sell the “Atlantic Bundle.” Finally, software has historically been given away, which appears to be the industry norm; charging for software may alienate customers (internal weakness, Appendix A).

Alternative Courses of Action

Free PESA Software with Purchase. Rather than regarding the PESA R&D as a sunk cost, I chose to distribute the costs across every server sold. The price under this route was determined to be $2,122 (see Appendix B). The primary drawback is that a customer who would have purchased the Tronn without the software would be charged a higher price ($2,122 vs. $2,000). On the other hand, continuing the norm of free software means staff would not have to be retrained and customers would not feel alienated. Furthermore, the one-bundle price could easily transition to online sales, and the low price will increase market share. However, the “free” software could create an illusion of low perceived value. Finally, the lower price results in lower profit margins and does not account for the value advantage the customer receives.

Competition-Based Pricing. The price under this route was determined to be $3,400 (see Appendix C). Under this route the company earns more profit per bundle sold, and minimal effort is required to determine the price. However, competition-based pricing creates indifference between the “Atlantic Bundle” and its competition, and the higher price will reduce market share and could start a pricing war.

Cost-Plus. The price under this route was determined to be $2,245 (see Appendix D). Atlantic would gain market share under this route, as the price is low relative to the benefits the customer receives, and the price would remain the same for the next three years. However, this approach does not account for the value advantage the customer receives, and it results in lower profit margins per bundle sold.

Value-In-Use.

The price under this route was determined to be $3,200 (see Appendix E). The primary benefit is that the approach is customer-focused: the price is justified, the customer will perceive higher value for the price, and higher margins will be earned. However, Atlantic will lose market share at the higher price, staff would have to be extensively retrained and motivated, and customers who primarily purchase online may be reluctant to sit through “We can save you money!” sales pitches.
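The candidate prices above can be framed as simple formulas. A minimal sketch follows; the cost, markup, and savings figures used in the example calls are illustrative placeholders, since the actual inputs live in Appendices B–E:

```python
# Illustrative pricing formulas for the routes above.
# All numeric inputs in the example calls are hypothetical placeholders;
# the case's actual figures are in Appendices B-E.

def cost_plus(unit_cost: float, markup: float) -> float:
    """Cost-plus: unit cost marked up by a target margin."""
    return unit_cost * (1 + markup)

def competition_based(rival_price: float, servers_replaced: int) -> float:
    """Price against the rival configuration the bundle replaces."""
    return rival_price * servers_replaced

def value_in_use(rival_price: float, servers_replaced: int,
                 customer_savings: float, share_captured: float) -> float:
    """Charge the rival baseline plus a share of the customer's savings."""
    return rival_price * servers_replaced + customer_savings * share_captured

print(cost_plus(1_796, 0.25))              # hypothetical unit cost and markup
print(competition_based(1_700, 2))         # bundle replacing two rival servers
print(value_in_use(1_700, 2, 4_000, 0.5))  # splitting savings 50/50 with customer
```

The value-in-use formula makes the trade-off explicit: the more of the customer's savings Atlantic tries to capture, the higher the price climbs above the competitive baseline.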


The company should proceed with the free-PESA-software route. The primary benefit is that the company can initiate online sales, which will reduce training costs, salaries, and commissions and make up for the lower profit margins earned. One drawback is that a customer will be charged a higher price even if they do not require the PESA tool. However, the target market has been narrowed to customers that require more than one server, because a customer who requires only one server is unlikely to purchase the Tronn over the Zink. The most likely response from Ontario is to lower the price of the Zink to remain competitive; at the low price of $2,122, Ontario would have to lower the Zink’s price to less than half the price of the Tronn to fight for share of the target market. Finally, the lower price supports a market-penetration strategy to maximize market share. The concern about perceived low quality can be set aside, as Ontario’s low-cost strategy has not affected customers’ opinions of quality.


The free PESA software will allow the company to compete on the same level as Ontario through price and online sales without having to retrain employees, stray from the general practice of providing free software, or introduce sales pitches to customers who would likely be reluctant to take part. The low, competitive price will support market penetration and favor Atlantic should Ontario reduce its prices.


Subject Scheduling System Essay


1) How many years have you been working in this institution as the one who finalizes subject schedules? Ans.: 9.25 years

2) During these years, have you encountered a subject scheduling system? _No__

a.) If yes, what particular program?

b.) If no, what is your current process in manipulating the subject schedule? Ans.: Manual
3) How efficient is your current process in manipulating the subject schedule? Ans.: Not efficient
4) What are the common problems you encountered in using this process? Ans.:
1.) Conflict of time
2.) Overloading
3.) Classroom utilization conflict
5) In your opinion, is there a need for us to create a system that will cater to the needs of plotting the subjects systematically? _Yes__

a.) If no, why not?

b.) If yes, what are the features you want to have in our system?

a) The software should register the subjects, instructors, and rooms.
b) The software should determine the teaching load of every faculty member.
c) The software should determine the room seating capacity.
d) The software should determine the subjects handled by every instructor.
e) The software should detect conflicts of time, instructor, and room assignment.
f) The software should determine whether a room is used for laboratory or lecture.
g) The software should determine the class size of every subject for room-assignment purposes.
h) The software should determine the number of units in every year level.
i) The software should include subject scheduling for major examinations with room assignments.
j) The software should include the scheduling of class shortening (30 minutes divided by the number of meetings every week).
k) The software should print the schedule of every section.
l) The software should print the faculty load.
m) The inputs should be in a combo-box format.
n) The software should balance the vacant time of every section.
o) The software should be user friendly.
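Feature (e), conflict detection, is the core of such a system: two class meetings clash when they overlap in time on the same day and share an instructor or a room. A minimal sketch of that check, using an illustrative schedule-entry record (the names here are assumptions, not part of the requirements above):

```python
from dataclasses import dataclass

@dataclass
class Entry:
    """One scheduled class meeting (illustrative record)."""
    subject: str
    instructor: str
    room: str
    day: str
    start: int  # minutes from midnight
    end: int

def overlaps(a: Entry, b: Entry) -> bool:
    """Two entries overlap if they share a day and their times intersect."""
    return a.day == b.day and a.start < b.end and b.start < a.end

def conflicts(entries: list[Entry]) -> list[tuple[Entry, Entry]]:
    """Report pairs that clash on instructor or room at the same time."""
    found = []
    for i, a in enumerate(entries):
        for b in entries[i + 1:]:
            if overlaps(a, b) and (a.instructor == b.instructor or a.room == b.room):
                found.append((a, b))
    return found

schedule = [
    Entry("Math 101", "Cruz", "R201", "Mon", 8 * 60, 9 * 60),
    Entry("Eng 102",  "Cruz", "R305", "Mon", 8 * 60 + 30, 9 * 60 + 30),  # same instructor, overlapping time
]
print(len(conflicts(schedule)))  # 1 conflict found
```

The same pairwise check covers features (e) and part of (g): extending the record with a class-size field and comparing it against room capacity would flag overfull assignments as well.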



Automated Grading System Essay



Letter grades were first used in the United States in the last part of the 19th century. Both colleges and high schools began replacing other forms of assessment with letter and percentage grades in the early 20th century. While grading systems appear to be fairly standardized in the U.S., debates about grade inflation and the utility of grades for fostering student learning continue.

Automation has had a notable impact in a wide range of industries beyond manufacturing (where it began).

Telephone operators, once common, have largely been replaced by automated telephone switchboards and answering machines. Medical processes such as primary screening in electrocardiography or radiography and laboratory analysis of human genes, cells, and tissues are carried out at much greater speed and accuracy by automated systems. Even elections have gone automated. Applying automation to grading systems likewise makes the task easier and more accurate.

1.1 Background of the Study

The group’s system, named the “Automated Student Evaluation System,” is effective at inputting and storing data, and the system’s excellence and efficiency are assured. The group has taken this opportunity as a challenge, pushed its ideas into reality, and considered many aspects and ideas in making this one-of-a-kind project. The group hopes that readers of this documentation will be inspired, and believes that the primary goal of grading and reporting is communication. Effective grading and reporting systems promote interaction and involvement among all stakeholders (i.e., students, parents, teachers, and administrators) in the educational process.

Grading promotes the attainment of defined, content-specific learning goals and identifies where additional work is needed when it is directly aligned to the curricula. Grades serve a variety of administrative purposes when determining suitability for promotion to the next level, credits for graduation and class rank.

Computerized grading makes the grading process faster, more consistent, and more reliable than traditional manual grading. Today’s advanced computers and other technologies in academic institutions help not just the establishment but everyone it covers, from the professors to the students. Using the programming languages available today, the proponents will use this technology to help the school enhance its system. But despite having a good system, some parts of it still need to be replaced or enhanced.

1.2 Statement of the Problem

The old system used only Microsoft Excel for inputting and storing grades; the grades could be accessed on only one computer, and there was a risk of data loss because the files were not secure enough.

Many things in this school have gone from manual to automated, yet the group noticed that such a system does not exist in this school. Building an Automated Student Evaluation System will make computing and calculating grades easy for professors, and students will gain as well: improved calculation accuracy will make the so-called “Hula of Grades” (grade guessing) non-existent in the future of Sta. Cecilia College.

1.3 Statement of Objectives

The system aims to lessen the time spent searching students’ records and processing grades, and to provide accurate information that reduces errors. One of the tangible benefits of this system is cost reduction and avoidance due to faster searching of student records and processing of grades.

1.3.1 General Objective

The system aims to provide more accurate information and reduce errors by making needed student grade information easy to retrieve. It will increase flexibility because it is packed with adequate information on students’ grades, and it will also secure those grades.

1.3.2 Specific Objectives

This study aims to:

❖ Properly arrange and organize the grades.

❖ Speed up students’ grade transactions.

❖ Lessen the time consumed, promoting the good image of the school through excellent service.

Nowadays, institutions use computerized applications to improve their services, and it is a necessity for this institution to keep up with today’s world and, perhaps, to change its image from a low-technology school to a facility equipped to a high standard. Sta. Cecilia College offers computer courses, and having a system like this will promote better learning, because students could develop an interest in database handling, programming, and systems analysis.

1.4 Significance of the Study

Getting involved in this kind of study is important for staying aware of modernizing technology, particularly computer systems that can be useful now and in the future, which is necessary to keep pace with advanced technology in the global technology competition. The proposed study will also help develop the proponents’ skills, especially in system analysis, system design, and programming. This study will create an Automated Student Evaluation System with a student information system, which will also help the institution cope with the long workflow of its previous system.

1.5 Scopes and Limitations


❖ The system can perform specific tasks such as inputting students’ grades and converting them to their equivalents.

❖ The system can hold student information, adding, editing, and saving it to the database.

❖ The system calculates individual student grades.

❖ Grades can only be accessed by professors through the log-in user module.

❖ Only the Registrar can access and modify student information.
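The grade-to-equivalent conversion mentioned in the scope can be sketched in a few lines. The cutoffs and point equivalents below are illustrative assumptions, not Sta. Cecilia College's actual grading table:

```python
# Convert a raw percentage grade to a point equivalent.
# The brackets below are illustrative, not the school's actual table.
BRACKETS = [
    (97, 1.00), (94, 1.25), (91, 1.50), (88, 1.75),
    (85, 2.00), (82, 2.25), (79, 2.50), (76, 2.75), (75, 3.00),
]

def equivalent(raw: float) -> float:
    """Return the point equivalent of a raw grade; 5.00 means failing."""
    for cutoff, points in BRACKETS:
        if raw >= cutoff:
            return points
    return 5.00  # below passing

print(equivalent(95))  # 1.25
print(equivalent(70))  # 5.0
```

Keeping the table as data rather than hard-coded branches means the registrar could adjust cutoffs without touching the conversion logic.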

Student evaluation is a very complex process that should take many factors into account. Recognizing the limits of various grading practices and balancing them with common sense and good judgment is an important part of the work of professional teachers.


Antivirus Programs Essay


Today, people rely on computers to create, store, and manage critical information, many times via a home computer network. Information transmitted over networks has a higher degree of security risk than information kept in a user’s home or company premises. Thus, it is crucial that they take measures to protect their computers and data from loss, damage, and misuse resulting from computer security risks. Antivirus programs are an effective way to protect a computer against viruses.

An antivirus program protects a computer against viruses by identifying and removing any computer virus found in memory, on storage media, or on incoming files.

When you purchase a new computer, it often includes antivirus software. Antivirus programs work by scanning for programs that attempt to modify the boot program, the operating system, and other programs that normally are read from but not modified. In addition, many antivirus programs automatically scan files downloaded from the Web, e-mail attachments, opened files, and all types of removable media inserted in the computer (Karanos 201-205).

One technique that antivirus programs use to identify a virus is to look for virus signatures, or virus definitions, which are known specific patterns of virus code. According to Shelly and Cashman (Antivirus Programs), many vendors of antivirus programs allow registered users to update virus signature files automatically from the Web at no cost for a specified time. Updating the antivirus program’s signature files regularly is important, because doing so downloads any new virus definitions added since the last update. No method guarantees a computer or network is safe from computer viruses. Installing, updating, and using an antivirus program, though, is an effective technique to safeguard your computer from loss.
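Signature scanning, as described above, amounts to searching files for known byte patterns. A toy sketch of the idea, using made-up signatures (real products use far more sophisticated matching and far larger definition files):

```python
# Toy signature scanner: flags data containing known virus byte patterns.
# The signatures below are invented for illustration only.
SIGNATURES = {
    "demo-virus-a": b"X5O!P%@AP",
    "demo-virus-b": b"\xde\xad\xbe\xef",
}

def scan(data: bytes) -> list[str]:
    """Return the names of any signatures found in the data."""
    return [name for name, sig in SIGNATURES.items() if sig in data]

clean = b"just an ordinary document"
infected = b"header \xde\xad\xbe\xef payload"
print(scan(clean))     # []
print(scan(infected))  # ['demo-virus-b']
```

This also shows why regular definition updates matter: the scanner can only flag patterns present in its `SIGNATURES` table, so a stale table silently misses newer viruses.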



Computer Revolution Essay


Personal Computers

The personal computer revolution was a phenomenon of immense importance in the 1980s. What the average American commonly refers to as a PC, or personal computer, did not even exist before the 1970s. Mainframe computers had been the norm, and they were primarily relegated to business and scientific use. With the dawn of the personal computer all Americans were allowed potential access to computers. As competition and modernization increased, issues of cost became less and less of an inhibitor, and it appeared that a new technological “populism” had developed.

Companies such as Apple Computer became household names, and words such as software and downloading became commonplace. It was predicted that by 1990, 60 percent of all the jobs in the United States would require familiarity with computers. Already by 1985, some 2 million Americans were using personal computers to perform various tasks in the office. The impact of the personal computer to the average American has been enormous—in addition to its usefulness at the office, it has become a source of entertainment, culture, and education.


Founded in 1976 by Steven Jobs and Stephen Wozniak, Apple Computer was to be the spearhead of the personal computer revolution. Apple had achieved moderate success in the late 1970s, but in the 1980s the company developed its innovative vision of how computers could relate to the average person. By 1982 Apple became the first personal computer company to reach annual sales of $1 billion. In 1983 Apple introduced the Lisa, intended as the successor to the Apple II and the first computer to widely introduce the concept of windows, menus, icons, and a mouse to the mainstream. The Lisa was surpassed by the Macintosh in 1984 and phased out by 1985. The Macintosh was faster, smaller, and less costly than the Lisa; it retailed for around $2,500 and was packaged as a user-friendly machine economical enough to be in every home. Although the machine possessed less processing capability than IBM PCs, no programming ability was needed to run it effectively, and it became popular.

Beyond Simplicity

Not satisfied to be simply “the easy PC,” Apple in 1986 introduced the Mac Plus, PageMaker, and the LaserWriter. The infusion of these three, particularly PageMaker, an easy-to-use graphics page-layout program, helped give rise to a new medium known as desktop publishing. Creating this new niche made Macintosh the premier, efficient publishing computer. Apple expanded its hold on the graphics market in 1987 with the introduction of the Mac II computer. Its color graphic capability fostered the introduction of color printers capable of reproducing the color images on the computer screen. By 1988 Apple introduced Macs capable of reading DOS and OS/2 disks, thereby closing some of the separation between Macintosh and IBM PCs.


On 12 August 1981 International Business Machines (IBM) released its first personal computer. Simply called the IBM PC, it became the definition of the personal computer. IBM was the largest of the three giant computer firms in the world; the other two, Hewlett-Packard (HP) and Xerox, had previously attempted to enter the new PC market but failed. IBM initially was not convinced that the American public was interested in computers, particularly for home use, but after viewing the early successes of Apple it was determined to enter the race. In creating the software for the PC, IBM turned to a young company called Microsoft to formulate MS-DOS.

Market Success

IBM PCs were immensely powerful, fast machines, and their entrance into the market legitimized the personal computer and created a new cottage industry. In 1983 IBM introduced the PCjr, a less expensive version of the PC. Despite strong advertisement PCjr was not a success and cost IBM quite a bit in reputation and money. Undiscouraged by these results, IBM pressed onward. By the mid 1980s, IBM PCs had inspired many clones that emulated IBM’s functions at a lower cost to consumers. Constantly setting the standard, IBM in 1987 introduced the PS/2 and the OS/2, the first IBM 386 models. IBM also established agreements with software companies such as Lotus to develop sophisticated programming for their company. Attempts were also made by the company to launch a line of portable computers over the decade. The success of these various portable models was somewhat limited, due to size and cost, as well as improper promotion. Even with several marketing setbacks throughout the decade, however, IBM remained the largest computer firm in the world. By 1989 IBM was producing personal computers that dwarfed earlier models in speed, capability, and technology.


As the personal computer explosion continued to grow, it spawned more and more cottage industries. One of the largest new markets to develop was the software industry, and one of the largest companies in that industry was Microsoft, founded in 1975 by William Gates and Paul Allen and later headquartered in Redmond, Washington. In 1981 Microsoft created MS-DOS, short for Microsoft Disk Operating System. Although it was initially licensed only to the IBM Corporation, by the end of the decade it had become the industry-standard operating software for all PCs. The ability to corner this lavish, fast-growing market solidified Microsoft’s software leadership position in the 1980s. Microsoft also began work late in the decade on Windows and OS/2 software programs for PCs and introduced programs for Apple Computer. Another growing software company was Lotus Development Corporation, which created its innovative 1-2-3 spreadsheet programs. Desktop publishing software was advanced greatly thanks to the growth of Apple Computer’s graphics capabilities. Countless other software programs, from the playful (video games) to the statistical (accounting programs), began to saturate the market, attempting to feed the growing desires of the American public.

Information Society

Computers have touched most aspects of how Americans function. Through their ability to link groups across great distances, they have made the world, at least theoretically, a smaller place. The computer was not the first technological advancement to impact the nation so greatly, but the speed with which it swept across the country and the pace at which change within the field continues to occur have been remarkable. As technology advanced, the cost of computers also declined significantly. Schools at all levels began to integrate computer literacy into their academic programs, as it was seen that this knowledge would be as essential as reading in the next century. Sales for computer companies skyrocketed as they rushed to meet demand. Computer magazines, such as Byte, PC World, and PC Magazine, were either born in the 1980s or grew substantially as interest in computers grew. Backlash regarding the growth of computers and their infiltration into society also occurred. Fear of an unfeeling technical society, where human contact has been replaced by machines, has been voiced by some extreme critics. On the more moderate side are criticisms that computer technology will only improve the lives of those who can afford the high cost of a PC. Thus, the computer, instead of unifying, could potentially increase the gap between rich and poor.

Machine of the Year

In 1983 Time magazine solidified the personal computer’s arrival into mainstream society when it named the PC its 1982 Machine of the Year. Time’s Man of the Year award was traditionally given to a man or woman who had made a significant mark on the world in the preceding year; by adapting the honor for a machine, Time acknowledged the immense contribution this technology had made to society. Computers, once available only to trained programmers, became increasingly commonplace in homes across the country. They changed the way the average American received and processed information at work and at home. Some critics scoffed at the fact that the magazine had bestowed such an important title on a machine, but Time defended the decision, stating, “There are some occasions, though, when the most significant force in a year’s news is not a single individual but a process, and a widespread recognition by a whole society that this process is changing the course of all other processes. That is why, after weighing the ebb and flow of events around the world, Time has decided that 1982 is the year of the computer.”


Paper Critique: “Airavat: Security and Privacy for Mapreduce” Essay


1. (10%) State the problem the paper is trying to solve.

This paper demonstrates how Airavat, a MapReduce-based system for distributed computations, provides end-to-end confidentiality, integrity, and privacy guarantees against data leakage using a combination of mandatory access control and differential privacy.

2. (20%) State the main contribution of the paper:  solving a new problem, proposing a new algorithm, or presenting a new evaluation (analysis). If a new problem, why was the problem important? Is the problem still important today? Will the problem be important tomorrow? If a new algorithm or new evaluation (analysis), what are the improvements over previous algorithms or evaluations? How do they come up with the new algorithm or evaluation?

The main contribution of the paper is that Airavat builds on mandatory access control (MAC) and differential privacy to ensure that untrusted MapReduce computations on sensitive data do not leak private information, providing confidentiality, integrity, and privacy guarantees.

The goal is to prevent malicious computation providers from violating privacy policies a data provider imposes on the data to prevent leaking information about individual items in the data.

The system is implemented as a modification to MapReduce and the Java virtual machine, and runs on top of SELinux.

3. (15%) Summarize the (at most) 3 key main ideas (each in 1 sentence.)

(1) It is the first work to add MAC and differential privacy to MapReduce. (2) It proposes a new framework for privacy-preserving MapReduce computations. (3) It confines untrusted code.

4. (30%) Critique the main contribution
a. Rate the significance of the paper on a scale of 5 (breakthrough), 4 (significant contribution), 3 (modest contribution), 2 (incremental contribution), 1 (no contribution or negative contribution). Explain your rating in a sentence or two.

This system provides security and privacy guarantees for distributed computations on sensitive data at the endpoints. However, the data can still be leaked in the cloud: multiple machines are involved in the computation, and a malicious worker could send intermediate data to an outside system, threatening the privacy of the input data. Even short of that, temporary data is stored on the workers and can be fetched even after the computation is done.

b. Rate how convincing the methodology is: how do the authors justify the solution approach or evaluation? Do the authors use arguments, analyses, experiments, simulations, or a combination of them? Do the claims and conclusions follow from the arguments, analyses or experiments? Are the assumptions realistic (at the time of the research)? Are the assumptions still valid today? Are the experiments well designed? Are there different experiments that would be more convincing? Are there other alternatives the authors should have considered? (And, of course, is the paper free of methodological errors.)

As the authors state on page 3, “We aim to prevent malicious computation providers from violating the privacy policy of the data provider(s) by leaking information about individual data items.” They use a differential privacy mechanism to ensure this. One interesting solution to data leakage is that they have the mapper specify a range for its keys. It seems that the larger your dataset is, the more privacy you have, because a single user affects less of the output if removed. They showed results that were really close to 100% accurate even with the added noise, so this seems to be a viable solution for protecting the privacy of your input data.
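The differential-privacy mechanism relied on here can be sketched as follows. This is a minimal illustration of the standard Laplace mechanism, not Airavat's actual code; the function names and parameters are hypothetical:

```python
import math
import random

def laplace_noise(sensitivity, epsilon):
    """Sample noise from a Laplace distribution with scale sensitivity/epsilon."""
    scale = sensitivity / epsilon
    u = random.random() - 0.5          # uniform on (-0.5, 0.5)
    sign = 1 if u >= 0 else -1
    return -scale * sign * math.log(1 - 2 * abs(u))  # inverse-CDF sampling

def noisy_count(records, predicate, epsilon):
    """Differentially private count: a count query has sensitivity 1,
    since adding or removing one record changes the result by at most 1."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(sensitivity=1, epsilon=epsilon)
```

The noise scale is fixed by the query's sensitivity, so on a larger dataset the same noise is a smaller relative distortion, matching the observation above that bigger inputs get near-100% accuracy for the same privacy level.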

c. What is the most important limitation of the approach?

As the authors mention, one computation provider could exhaust the privacy budget on a dataset for all other computation providers and use more than its fair share. While there is some estimation of effective parameters, a large number of parameters must be set for Airavat to work properly. This increases the probability of misconfigurations, or of configurations that severely limit the computations that can be performed on the data.
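The budget-exhaustion concern can be illustrated with a toy accounting sketch (the names `PrivacyBudget` and `spend` are illustrative, not Airavat's real interface):

```python
class PrivacyBudget:
    """Toy per-dataset privacy budget accounting (illustrative only).

    Every query spends part of a fixed epsilon budget; once it is gone,
    all further queries are refused. A single greedy computation
    provider can therefore starve everyone else, as noted above.
    """

    def __init__(self, total_epsilon):
        self.remaining = total_epsilon

    def spend(self, epsilon):
        # Refuse the query outright rather than partially answer it.
        if epsilon > self.remaining:
            raise PermissionError("privacy budget exhausted")
        self.remaining -= epsilon

budget = PrivacyBudget(total_epsilon=1.0)
budget.spend(0.6)  # provider A's first query
budget.spend(0.4)  # provider A again -- the whole budget is now gone
# Any query from provider B would now raise PermissionError.
```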

5. (15%) What lessons should researchers and builders take away from this work. What (if any) questions does this work leave open?

The current implementation of Airavat supports both trusted and untrusted mappers, but reducers must be trusted. The authors also modified the JVM to make mappers independent (using invocation numbers to identify current and previous mappers), and modified the reducers to provide differential privacy. From the data provider’s perspective, several privacy parameters must be supplied, such as the privacy group and the privacy budget.

6. (10%) Propose your improvement on the same problem.

I have no suggested improvements.


Secondary Storage Devices Essay

Secondary Storage Devices Essay.

As we all know, main memory stores data in a temporary manner, which means all the data is lost when the power goes off. To keep our data safe, we use secondary storage devices. These store data in a permanent manner, so everything remains stored whether the power is switched on or off. For storing data permanently, we commonly use magnetic storage devices.

Advantages Of Secondary Storage Devices

1) Non-Volatile Storage Devices: Secondary storage devices are non-volatile in nature, which means they do not lose their data when the power goes off. Thus, data stored on a non-volatile storage device is preserved even when the power is switched off.

2) Mass Storage: The capacity of these devices is very high, which means we can store huge amounts of data on secondary storage devices, on the order of gigabytes and terabytes.

3) Cost Effective: The cost of secondary storage devices is lower than that of main memory, which makes them more cost effective. Moreover, they don’t get damaged easily, so the data on them is unlikely to be lost.

4) Reusability: Secondary storage devices are always reusable. The data they contain can easily be edited or removed as per our requirements; they are re-writable, and we can add data to them or copy that data to our computers or laptops.

5) Portable: Secondary storage devices are portable; they are small, can easily be carried anywhere, and don’t require much space.


Motherboard Essay

Motherboard Essay.

Before the advent of microprocessors, i.e., in first-, second-, and third-generation computers, the computer was usually built in a card-cage case or mainframe, with components connected by a backplane consisting of a set of slots themselves connected with wires; in very old designs, the wires were discrete connections between card connector pins. Printed circuit boards became standard practice in the late 1970s: the central processing unit, memory, and peripherals were housed on individual printed circuit boards which plugged into the backplane.

(A backplane is a circuit board that connects several connectors in parallel to each other, so that each pin of each connector is linked to the same relative pin of all the other connectors, forming a computer bus.)

During the late 1980s and 1990s, it became economical to move an increasing number of peripheral functions onto the motherboard PCB. Hence, single integrated circuits (ICs) capable of supporting low-speed peripherals like serial ports, mice, keyboards, etc., were included on motherboards. By the late 1990s, motherboards began to carry a full range of audio, video, storage, and networking functions; higher-end support for 3D gaming and graphics cards was also included later.

Micronics, Mylex, AMI, DTK, Orchid Technology, Elitegroup, etc. were a few of the early pioneers in the field of motherboard manufacturing, but companies like Apple and IBM soon took over.

Today, motherboards typically boast a wide variety of built-in features, and they directly affect a computer’s capabilities and potential for upgrades.

Today Intel and Asus are the two leading companies in the field of motherboard manufacturing.

A typical desktop computer has its microprocessor, main memory, and other essential components connected to the motherboard. Other components such as external storage, controllers for video display and sound, and peripheral devices may be attached to the motherboard as plug-in cards or via cables, although in modern computers it is increasingly common to integrate some of these peripherals into the motherboard itself. Few things that a motherboard nowadays include are:

• sockets (or slots) in which one or more microprocessors may be installed.
• slots into which the system’s main memory is installed (typically in the form of DIMM modules containing DRAM chips).
• a chipset which forms an interface between the CPU’s front-side bus, main memory, and peripheral buses.
• non-volatile memory chips (usually Flash ROM in modern motherboards) containing the system’s firmware or BIOS.
• a clock generator which produces the system clock signal to synchronize the various components.
• slots for expansion cards (these interface to the system via the buses supported by the chipset).

• power connectors, which receive electrical power from the computer power supply and distribute it to the CPU, chipset, main memory, and expansion cards.
• Additionally, nearly all motherboards include logic and connectors to support commonly used input devices, such as PS/2 connectors for a mouse and keyboard. Occasionally, video interface hardware is also integrated into the motherboard. Additional peripherals such as disk controllers and serial ports are provided as expansion cards.
• Given the high thermal design power of high-speed computer CPUs and components, modern motherboards nearly always include heat sinks and mounting points for fans to dissipate excess heat.

CPU Sockets

• A CPU socket or slot is an electrical component that attaches to a printed circuit board (PCB) and is designed to house a CPU (also called a microprocessor).
• It is a special type of integrated circuit socket designed for very high pin counts. A CPU socket provides many functions, including a physical structure to support the CPU, support for a heat sink, easier replacement (as well as reduced cost), and most importantly, an electrical interface with both the CPU and the PCB.
• CPU sockets are found in most desktop and server computers (laptops typically use surface-mount CPUs), particularly those based on the Intel x86 architecture. The socket type and motherboard chipset must support the CPU series and speed.

Integrated Peripherals

• It is possible to include support for many peripherals on the motherboard. By combining many functions on one PCB, the physical size and total cost of the system may be reduced; highly integrated motherboards are thus especially popular in small form factor and budget computers.

Peripheral Card Slots

• A standard ATX motherboard will typically have one PCI-E 16x connection for a graphics card, two conventional PCI slots for various expansion cards, and one PCI-E 1x slot. A standard EATX motherboard will have one PCI-E 16x connection for a graphics card and a varying number of PCI and PCI-E 1x slots; it can sometimes also have a PCI-E 4x slot.
• Some motherboards have two PCI-E 16x slots, to allow more than two monitors without special hardware, or to use a special graphics technology called SLI (for Nvidia) or Crossfire (for ATI). These allow two graphics cards to be linked together for better performance in graphically intensive tasks, such as gaming and video editing.

• Virtually all motherboards come with at least four USB ports on the rear, with at least two connections on the board internally for wiring additional front ports that may be built into the computer’s case.
• Ethernet is also included. Ethernet is the standard networking connection for linking the computer to a network or a modem.
• A sound chip is always included on the motherboard, allowing sound output without any extra components. This allows computers to be far more multimedia-based than before. Some motherboards also have video outputs on the back panel for integrated graphics solutions.

Computer Cooling

• Motherboards are generally air-cooled, with heat sinks often mounted on larger chips such as the Northbridge. If the motherboard is not cooled properly, the computer can crash.
• Passive cooling, or a single fan mounted on the power supply, was sufficient for many desktop computer CPUs until the late 1990s; since then, most have required CPU fans mounted on their heat sinks, due to rising clock speeds and power consumption. Most motherboards also have connectors for additional case fans.
• Newer motherboards have integrated temperature sensors to detect motherboard and CPU temperatures, and controllable fan connectors which the BIOS or operating system can use to regulate fan speed. Some computers use a water-cooling system instead of many fans.

Bus & Bus Speed

• A bus is simply a circuit that connects one part of the motherboard to another. The more data a bus can handle at one time, the faster it allows information to travel. The speed of the bus, measured in megahertz (MHz), refers to how much data can move across the bus simultaneously.
• Bus speed usually refers to the speed of the front-side bus (FSB), which connects the CPU to the northbridge. FSB speeds can range from 66 MHz to over 800 MHz. Since the CPU reaches the memory controller through the northbridge, FSB speed can dramatically affect a computer’s performance.


• The speed of the chipset and buses controls how quickly the CPU can communicate with other parts of the computer. The speed of the RAM connection directly controls how fast the computer can access instructions and data, and therefore has a big effect on system performance. A fast processor with slow RAM is going nowhere.
• The amount of memory available also controls how much data the computer can have readily available. RAM makes up the bulk of a computer’s memory, and the general rule of thumb is: the more RAM the computer has, the better.
• Much of the memory available today is double data rate (DDR) memory. This means the memory can transmit data twice per cycle instead of once, which makes it faster. Also, most motherboards have space for multiple memory modules, and on newer motherboards they often connect to the northbridge via a dual bus instead of a single bus, further reducing the time it takes for the processor to get information from memory.
• A motherboard’s memory slots directly affect what kind and how much memory is supported. Just like other components, the memory plugs into the slot via a series of pins; the memory module must have the right number of pins to fit the slot on the motherboard.
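The DDR point above can be checked with a quick back-of-envelope calculation. The clock rate and bus width below are illustrative values, not a claim about any specific motherboard:

```python
def memory_bandwidth_mb_s(bus_mhz, bus_width_bits, transfers_per_cycle):
    """Peak bandwidth (MB/s) = clock rate x transfers per cycle x bus width in bytes.
    The 10^6 factors in MHz and MB cancel, so MHz x bytes gives MB/s directly."""
    bytes_per_transfer = bus_width_bits // 8
    return float(bus_mhz * transfers_per_cycle * bytes_per_transfer)

# Single data rate vs. DDR on the same (illustrative) 100 MHz, 64-bit bus:
sdr = memory_bandwidth_mb_s(100, 64, transfers_per_cycle=1)  # 800.0 MB/s
ddr = memory_bandwidth_mb_s(100, 64, transfers_per_cycle=2)  # 1600.0 MB/s, twice the throughput
```

Transferring twice per clock cycle doubles peak bandwidth without raising the clock rate, which is exactly why DDR memory is faster on the same bus.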

Form factor

• Motherboards are produced in a variety of sizes and shapes called computer form factors, some of which are specific to individual computer manufacturers.
• The current desktop PC form factor of choice is ATX. A case’s motherboard and PSU form factors must all match, though some smaller form factor motherboards of the same family will fit larger cases; for example, an ATX case will usually accommodate a microATX motherboard.
• Laptop computers generally use highly integrated, miniaturized, and customized motherboards. This is one of the reasons laptops are difficult to upgrade and expensive to repair; often the failure of one component requires replacing the entire motherboard, which is usually more expensive than a desktop motherboard due to the large number of integrated components.


Can Computer Replace Human Beings Essay

Can Computer Replace Human Beings Essay.

Many of us think that computers are many times faster, more powerful, and more capable than humans simply because they can perform calculations thousands of times faster, work out logical computations without error, and store memory at incredible speeds with flawless accuracy.

Human Brain:

We can only estimate the processing power of the average human brain, as there is no way to measure it quantitatively as of yet. If the theory that nerve volume is proportional to processing power is true, then we may have a reasonable estimate of the human brain’s processing power.

* By simple calculation, we can estimate the processing power of an average brain to be about 100 million MIPS (millions of instructions per second). In case you’re wondering how much speed that is, here is an idea:
* 1999’s fastest PC processor chip on the market was a 700 MHz Pentium that did 4,200 MIPS. By simple calculation, we can see that we would need at least 24,000 of these processors in a system to match the total speed of the brain!
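The arithmetic behind that 24,000-processor figure can be reproduced directly, using the estimates above:

```python
# Reproducing the back-of-envelope estimate from the text.
brain_mips = 100_000_000  # ~100 million MIPS from the nerve-volume argument
pentium_mips = 4_200      # 700 MHz Pentium, 1999

processors_needed = brain_mips / pentium_mips
print(round(processors_needed))  # prints 23810 -- roughly 24,000 chips
```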

Computers have brought a revolution in human life.

To begin with, computers took over different human activities. Now even thinking and problem-solving are being done by computers.

The situation makes many of us believe that computers are likely to replace human beings in every walk of life. But every coin has two sides. However useful they may be, computers cannot replace human beings. Human life is not a mechanical affair. A pearl-like tear silently rolls down the cheek at the memory of a loved one. A compliment from an elderly person restores the confidence of a depressed person. Can computers have intelligence and think like human beings? Will computers be superior to us and replace us in the future? On hearing these questions, many people may think it impossible that computers will be superior to humans. The computer is made by us; it is only a machine, a tool. It cannot have feelings.

But a lot of facts surprise us. Let us see what is going on in detail. A chess-playing computer defeated the world chess champion in 1997. Nowadays, artificial intelligence has developed significantly. Computers can understand our language and accept oral commands. Computers can already do a lot of tasks, and they are learning to do new tasks one by one.

In some fields, computers indeed work more efficiently than humans. However, I think we should catch the key point: a computer is always doing the things that we told it how to do. We admit that if we tell a computer how to do the work, it can do it, and sometimes it can do it better than us, because a computer has a greater ability to deal with certain special kinds of problems and it will not get tired.

A computer cannot solve new problems that it has never met. The human development process consists of raising problems and solving them again and again; these attributes cannot be possessed by the computer. Though a computer can act like a human, it is still a computer; it doesn’t have feelings or free will.

We have feelings: we will be happy or unhappy, and we will be ashamed when we do something wrong. We have a soul and we are alive. We have free will to decide what to do. Can a computer have feelings? It can’t. It has no will; what it is doing is only executing the programs made by humans. I don’t think a computer can ever replace a human, because it doesn’t have the same physical needs that we have. But I want a computer to interface with me almost like a human: at least on the interface side, to be polite like a human and to understand my human needs. I want it to serve me and understand me as a human.

However, I expect a computer to be better than a human in many ways, such as keeping track of time. I expect a computer to know what five minutes is. I expect a computer to be reliable. I expect a computer’s memory to be perfect. I expect a computer to do all the things that computers do well. Record video information – humans can’t do that – record audio information, do text-to-speech, keep an accurate and perfect record of time and what happens in time. All the things that a computer is flawless at and can do well.

I want the computer to help me. Help me augment my memory, so that when I go to the doctor and they say “What did you have for breakfast?” it could show me, “This is what you had for breakfast, I took a picture of it.” Because that’s what a computer can do for you. But I want it to understand that that’s what I need, that’s what I want. In order for the computer to understand what I need and what I want, it has to understand my emotional reactions to things, so that it can learn what it is that I need and want.

A simple touch of a mother silences a crying baby. Can a computer perform these and many other such miracles? Nowadays, teaching is being done by computers. Computer-lovers claim that they can learn with the help of a computer. Computers also administer tests, declare results, and award certificates. But imagine the difference between the two situations, i.e., sitting before a computer versus sitting in a classroom with dozens of students around us and in the presence of a teacher.

The pains and pleasures of companionship, the repudiating as well as encouraging expressions on the teacher’s face, the direct interaction, eye contact, spontaneous smiles, and abundant sharing and understanding set this living situation a world apart from the lonely, computer-controlled, suffocating room.

