CISC vs RISC architectures

The debate over whether the RISC or the CISC architecture is better has been going on for many years. It has been hard to determine whether the RISC architecture, with its small but efficient instruction set, or the CISC architecture, with its large and easy-to-use instruction set, is superior. At a time when new chips are released nearly every month, companies want an edge over the competition, and they want their chips designed with speed in mind. Processors have used either the Reduced Instruction Set Computer (RISC) or the Complex Instruction Set Computer (CISC) approach since early in the computer era, but neither has ever been a clear-cut winner; both have strengths and weaknesses. This lecture discusses the benefits and disadvantages of each architecture and asks which is the better choice.

Learning Outcomes:
On completion of this lecture, you will be able to:

  • Understand the requirements to consider in designing a computer instruction set;
  • Discuss the performance of CISC and RISC architectures.

27.1    Instruction set design

Chip designers want to increase the speed of their chips as much as possible, but this is not easy. The way a chip works can greatly affect its performance. There are three steps that every processor performs: fetch the instruction, decode it, and execute it. How this is done differs from one processor to the next. To increase the speed of a processor you can do two things. First, you can decrease the number of clock cycles each instruction takes. A clock cycle is the discrete time interval that a computer uses to determine when events take place in the hardware. Second, you can increase the clock speed, i.e. the rate at which clock cycles occur, usually measured in megahertz; the higher the clock speed, the shorter each cycle. Therefore, when designers create an instruction set they want their instructions to use as few clock cycles as possible while supporting the highest possible clock speed. This is not always easily obtainable.
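
To make the fetch-decode-execute cycle concrete, the following minimal Python sketch steps through a tiny invented instruction set. The opcodes, register file and memory layout are assumptions made purely for illustration; they do not model any real processor.

    # Minimal sketch of the fetch-decode-execute loop described above.
    # The three-instruction "ISA" (LOAD, ADD, HALT), the register file and
    # the memory layout are invented for illustration; they model no real chip.

    def run(program, memory):
        regs = [0] * 4              # small register file
        pc = 0                      # program counter
        while True:
            op, *args = program[pc]         # 1. fetch the next instruction
            pc += 1
            if op == "LOAD":                # 2./3. decode and execute
                rd, addr = args             # LOAD rd, addr  ->  rd = memory[addr]
                regs[rd] = memory[addr]
            elif op == "ADD":               # ADD rd, rs1, rs2  ->  rd = rs1 + rs2
                rd, rs1, rs2 = args
                regs[rd] = regs[rs1] + regs[rs2]
            elif op == "HALT":
                return regs

    program = [("LOAD", 0, 0), ("LOAD", 1, 1), ("ADD", 2, 0, 1), ("HALT",)]
    print(run(program, memory=[40, 2]))     # -> [40, 2, 42, 0]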

27.2    Historical context

The CISC architecture was introduced at a time when compilers were inefficient and memory was very expensive. This period was called the "software crisis": the price of software was skyrocketing while the price of hardware was falling. Researchers believed the best remedy was to take the burden off the software (the compilers) and let the hardware do the dirty work. They wanted to make the compiler's job easier by providing instructions that resembled high-level statements. High-level languages are so called because each statement can correspond to numerous assembly instructions, whereas a low-level language has a one-to-one correspondence with assembly instructions. Essentially, they were making the assembly language look like a high-level language; an extreme case of this was the High-Level Language Computer Architecture. If every high-level language statement translated to a single assembly instruction, there would be many advantages: compilers would be easier to write, code would be smaller and so save memory, and code would be easier to debug.

The performance of an architecture is one of its most important characteristics. The formula for performance is

time/program = (instructions/program) × (cycles/instruction) × (time/cycle).
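
To see how the three factors trade off, the short sketch below plugs purely hypothetical figures into this equation for two imaginary designs: one that needs fewer but more complex instructions, and one that needs more instructions at a lower cycle count each. All of the numbers are invented for illustration.

    # Worked example of: time/program = instructions/program * cycles/instruction * time/cycle
    # All figures below are hypothetical, chosen only to show how the factors combine.

    def time_per_program(instructions, cycles_per_instruction, clock_hz):
        time_per_cycle = 1.0 / clock_hz
        return instructions * cycles_per_instruction * time_per_cycle

    # Design A: fewer, more complex instructions; Design B: more, simpler instructions.
    design_a = time_per_program(instructions=1_000_000, cycles_per_instruction=6.0, clock_hz=50e6)
    design_b = time_per_program(instructions=1_500_000, cycles_per_instruction=1.2, clock_hz=50e6)

    print(f"design A: {design_a * 1000:.1f} ms per program")   # 120.0 ms
    print(f"design B: {design_b * 1000:.1f} ms per program")   # 36.0 ms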

The CISC architecture increased performance by reducing the overall number of instructions, which saves memory as well. Microcoding was one of the main ways CISC was able to do the work of multiple assembly instructions in a single instruction. Microcode is code stored on the processor that handles complex instructions and tells the processor what to do with them. The alternative to microcode is direct execution. With direct execution, the machine fetches an instruction from memory and feeds it into a hardwired control unit; this control unit takes the instruction as its input and activates the circuitry that carries out the task. The main advantage of direct execution is that it is fast; the downside is that it requires many transistors. Microcode, on the other hand, uses a micro-engine that fetches the instruction and then runs the appropriate microcode subroutine. At first this was a very slow way of doing things, but advances in microcode technology greatly increased its speed. The problem is that for microcode to be fast it had to be very compact and efficient, which often made it hard to debug. This led to many shipped processors containing microcode bugs.
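
As a rough illustration of the microcoded approach (not the control unit of any real processor), the sketch below stores a table of microcode subroutines on the "processor" and expands each complex instruction into a sequence of simple micro-operations. The instruction names and micro-ops are invented.

    # Rough sketch of a micro-engine: each instruction is looked up in an on-chip
    # microcode store and expanded into simple micro-operations. The instruction
    # names and micro-ops are invented purely for illustration.

    MICROCODE = {
        "ADD_MEM": ["load_operand_1", "load_operand_2", "alu_add", "store_result"],
        "MOV_REG": ["reg_transfer"],    # a simple move needs only one micro-op
    }

    def micro_engine(instruction_stream):
        """Fetch each instruction, then run its microcode subroutine."""
        for instr in instruction_stream:
            for micro_op in MICROCODE[instr]:
                print(f"{instr:8s} -> {micro_op}")

    micro_engine(["MOV_REG", "ADD_MEM"])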

27.3    Discussion

The designers of CISC wanted to save as much memory as possible. The processors could not manage to execute every instruction in a single clock cycle; instructions ranged from 1 clock cycle up to around 100 clock cycles [1]. For instance, the mul instruction on the 8088 required 133 clocks to multiply two 16-bit integers, whereas the mov instruction required only 2 clocks [1]. Another characteristic of CISC is that instruction lengths can vary, which slows decoding but saves space in memory.

CISC processors were designed on the assumption that assembly language programming was of the utmost importance; high-level languages were not very popular when CISC first appeared, so the designers wanted to make it easier for programmers to work in assembly. When researchers later analyzed instruction streams, they found that the greatest amount of time was spent executing simple instructions and performing loads and stores. Compilers very rarely used the complex instructions that CISC provided: they could not generate efficient code with them, and many had no direct high-level counterparts.

As chips became faster and more complex it became increasingly hard to create efficient architectures. The old CISC designs had grown so complex that they now spanned more than one chip. Designers wanted to simplify their designs by fitting the whole CPU on a single chip. Since a chip has a limited number of transistors, some instructions had to go. To decide which, designers carried out extensive studies of which instructions were executed most and which least, and then optimized the most frequently executed ones, accepting that the rarely used instructions might run slower. They were essentially undoing what the designers of CISC had started: software (compilers) had improved greatly, so it was no longer necessary for the processor to take on these tasks. This was the beginning of the RISC architecture.

The first thing the designers of RISC did was throw out the microcode engine. They had discovered that hardly anyone used the more exotic instructions; compiler writers found them too hard to exploit and instead used sequences of simple instructions to do the same thing. So the complex instructions were removed, which is how the name "reduced instruction set computer" came about. The designers also aimed for one clock cycle per instruction, roughly turning each microinstruction of the CISC approach into an assembly instruction in its own right. This also made pipelining easier. Pipelining is the practice of overlapping the execution of multiple instructions so that, once the pipeline is full, an instruction completes on every clock cycle. Pipelining is only effective if the instructions do not vary too much in complexity: to be pipelined smoothly, instructions have to start and finish in step with one another. Done correctly, this can greatly increase performance. In terms of the performance equation, the RISC architecture reduces the number of clock cycles per instruction while accepting more instructions per program, which turned out to be much faster than the CISC approach.
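
The value of uniform, single-cycle instructions for pipelining can be shown with a back-of-the-envelope cycle count. The sketch below assumes an idealized k-stage pipeline with no stalls or hazards; the instruction count and pipeline depth are invented figures.

    # Back-of-the-envelope pipelining arithmetic (stalls and hazards ignored).
    # With a k-stage pipeline of uniform single-cycle stages, n instructions take
    # roughly k + (n - 1) cycles, instead of k * n cycles without pipelining.

    def unpipelined_cycles(n_instructions, stages):
        return n_instructions * stages

    def pipelined_cycles(n_instructions, stages):
        return stages + (n_instructions - 1)

    n, k = 1000, 5     # invented figures: 1000 instructions, 5 pipeline stages
    print("unpipelined:", unpipelined_cycles(n, k), "cycles")      # 5000
    print("pipelined:  ", pipelined_cycles(n, k), "cycles")        # 1004
    print("speed-up:    %.2fx" % (unpipelined_cycles(n, k) / pipelined_cycles(n, k)))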

The RISC architecture is best known for having fewer instructions, which keeps the chip less complex; the idea is to use simple instructions several times instead of one complex instruction, which usually gives a net gain in performance. The ideal RISC design uses one clock cycle for every instruction. To get there, designers removed more complex instructions such as mul, which multiplies two numbers, and made sure that instructions like add take only one clock cycle; a multiplication can then be computed with a sequence of additions. RISC machines also have more registers. Since registers are the fastest memory in the processor, it is logical to have as many as is still economical, and more registers also mean fewer instructions spent pushing values onto the stack because you have run out of registers. Register memory is the fastest but also the most costly, so there are limits. RISC goes further by never fetching from memory during an arithmetic instruction: separate load and store instructions must be used to access memory, so that no single instruction takes too long. RISC instructions are all the same length, so decoding is quick. RISC also emphasizes floating-point performance. Finally, it is easier to design an efficient compiler for a RISC architecture; since people mainly program in high-level languages, it is more valuable for a RISC instruction set to be a good compiler target than to be convenient for hand-written assembly.
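
As an illustration of replacing a complex mul instruction with cheaper operations, the sketch below multiplies two integers using only additions, shifts and comparisons, each of which a RISC machine can keep to a single cycle. This is a sketch of the general shift-and-add technique, not the code any particular compiler emits.

    # Shift-and-add multiplication built only from operations a RISC machine keeps
    # cheap: add, shift and compare. A sketch of the general technique, not the
    # code any particular compiler actually generates.

    def shift_add_mul(a, b):
        """Multiply two non-negative integers using only adds and shifts."""
        result = 0
        while b:
            if b & 1:           # if the lowest bit of b is set...
                result += a     # ...add the current shifted copy of a
            a <<= 1             # a = a * 2
            b >>= 1             # move on to the next bit of b
        return result

    assert shift_add_mul(13, 11) == 143
    print(shift_add_mul(13, 11))    # 143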

There are many techniques for increasing the efficiency of compiled code on a RISC architecture. Instruction scheduling for the pipeline is one of them: ideally the processor should be executing useful instructions while loads and stores are in flight, so the compiler reorders instructions to make this possible. This matters because roughly one third of instructions are loads and stores. Since a load takes more than one cycle to complete, a technique called scoreboarding was devised: when a value is on its way to a register, that register is marked, and execution continues with instructions that do not touch it; any instruction that needs the marked register must wait until the value arrives. Peephole optimization is another method of reducing loads and stores: it removes redundant loads and unnecessary stores. Similarly, before a branch can be taken its target address must be calculated, and the compiler can move that calculation earlier so it overlaps with a previous instruction. Inlining can also reduce overall run time: it copies the body of a procedure directly into the calling code, which pays off most for short procedures that are called many times. Optimizations of this kind can improve overall performance by around 50% on a RISC machine, but on a CISC machine the gain is much smaller, mainly because CISC microcode can hide code sequences that could otherwise be optimized. One drawback of all these optimizations is that they can make code hard to debug: with data being moved around, the debugger can lose track of where values live. A good compiler offsets this by recording the necessary information in the object file.
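
As a toy illustration of one of these techniques, the sketch below runs a peephole pass over a list of invented pseudo-instructions and removes a load that immediately follows a store of the same register to the same address. It is not the intermediate representation of any real compiler.

    # Toy peephole pass: drop a LOAD that immediately follows a STORE of the same
    # register to the same address, since the value is already in the register.
    # The pseudo-instructions are invented for illustration.

    def peephole(code):
        out = []
        for instr in code:
            if (out
                    and instr[0] == "LOAD"
                    and out[-1][0] == "STORE"
                    and out[-1][1:] == instr[1:]):   # same register, same address
                continue                             # redundant load: skip it
            out.append(instr)
        return out

    before = [("STORE", "r1", "x"), ("LOAD", "r1", "x"), ("ADD", "r2", "r1", "r1")]
    print(peephole(before))   # [('STORE', 'r1', 'x'), ('ADD', 'r2', 'r1', 'r1')]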

27.4    Conclusion

The future of RISC is changing every day. New technologies have allowed many changes to the RISC architecture. Modern designs have more registers and on-chip caches clocked as fast as the processor. They have added extra functional units for superscalar execution, additional non-RISC (but fast) instructions, on-chip support for floating-point operations, increased pipeline depth, branch prediction, out-of-order execution, and on-chip support for SIMD operations.

Today's processors cannot simply be labelled CISC or RISC. Modern processors are so advanced that the constraints which gave rise to the two architectures no longer exist. The RISC architecture may have been the most efficient a few years ago, but pure RISC is quickly becoming outdated, and hybrids of CISC and RISC are now prevalent. The RISC vs. CISC debate is no longer really applicable: you cannot fairly compare the two architectures, because each was the best for its own time and circumstances. CISC was the best choice when software was expensive; later, as prices dropped, RISC became the better option.


REFERENCES

  [1] Risc vs. Cisc, http://www.hackerz.ir/e-books/Risc_vs_Cisc.htm

 

 
