{ "/work/yuxiang1234/prompting-whisper/audios-1/FZpE0dCKu9g.mp3": { "gt": "Welcome to lesson 21 of the course Industrial Automation and Control, in this lesson we are going to learn A Structured Design Approach to Sequence Control. So far we have mainly seen the programming constructs have seen small program segments, timers, counters. In this lesson for the first time, we will see that given a practical problem, how to study the problem, what are the steps that you go through to finally arrive at an RLL program. So, and this will be followed, using a very systematic approach, because as I have already told you that industrial control applications are very critical. In the sense that, if you have programming errors in them, they can be very expensive in terms of money or in terms of even can cost human lives etcetera. So, it is always good to have a very systematic design process by which you can decompose a problem and then finally, arrive at a solution. So, we will look at the instructional objectives, the instructional objectives of this lesson are firstly able to model simple sequence control applications using state machines. State machine is a actually formal method and we advocate the use of formal methods, because English can be very ambiguous sometimes contradictory also. So, we have to model it using methods, which are unambiguous consistent do not contain contradictions and are also easy to understand and develop. Then, from these formal models, we have to develop RLL programs, for such applications. And for doing this, there are certain apart from the RLL programs, there are some modern programming construct, which have been made available. One of them is a SFC or the sequential function chart, so we will take a look at that and also understand some of it is advantages, so these are the instructional objectives of this lesson. So, now let us go through the steps in basic broad steps in sequence control design, so first step is to study the system behavior. This is a very critical step and most of the errors that happen in any programming exercise, not only this kind of industrial automation programming, any programming. Mainly arises from the fact, that the programmer or the developer did not understand the system well. So, this is a very important step and one must, first of all identify inputs to this to the system, that is the controller program, what inputs it will take, inputs can come from either from sensors in the field or it comes from operator interface, which I call the MMI or the Man Machine Interface. So, somebody presses a push button; that is an operator input. On the other hand some limit is which is made, that is a sensor input. Similarly, identify the outputs, so switch on motor, so motor is an actuator; that is a kind of output. There is another kind of output, for example, switches on some indicator or some lamp that would be an output, which again goes to the MMI or the Man Machine Interface, so we have to first identify these. Then, study the sequence of actions and events, under the various operational modes, this is the main task. You have to very carefully understand, what is going to happen and what will happen after what, at what time intervals, etcetera. Then, one thing that must be very clearly remembered is that, when you are developing an industrial automation program not only has to remember, not only has to design for normal behavior. But, one must to some extend at least take into account the possible failure, that can occur. 
Otherwise, a system that behaves well under normal behavior can behave in a very nasty manner, if some simple element of the system, like a sensor fails. Then, even apart from the automated behavior, one has to examine the requirement; that exists for number one manual control. Manual control is very important, because for finally, if the automation equipment fails, it should be able to operate the system using manual control may be on the field. While the automated control may be actually working quite a distance away from the actual equipment, it may be housed in some control room. On the other hand, the manual controls may be near the equipment at the field. So, the possibility of including such manual controls must be examined, whether some additional sensors are required, some sensors may be there, but to achieve a kind of functionality, some other sensors may be needed. Indicators alarms and as well as operational efficiency or safety, these are the factors, which must be considered to finally arrive at the functionality. One must always remember that the customer may not always be able to express his or her needs and a good automation engineer should be able to supplement it, with his own experience in such cases. So, having done that, the next step is to convert generally these things are captured manually and using a linguistic description something like you know something like English statement. So, you talk to costumers, talk to engineers on the field and get their requirements, but this is very dangerous to use for program development. So, we have to convert this linguistic description into formal process models. And in fact, a lot of you know inconsistencies, which are there ambiguities, which are there in the linguistic description, actually surface at this time. Even, for during the process of transforming it into a formal process model 1; may initially use intermediate forms you know like flow charts for example. But, finally it is prescribed that, one should be able to convert it to a formal mathematical frame work, something like, let us say a finite state machine, which we will be using here. After, up to this the operations are manual; having done that then one has to go for design of the sequence control logic, based on the formal model. And then finally, one has to implement the control logic in the form of an RLL program and it is preferable, that these steps, especially the step D is made as much as much as possible automatic. Because, this is a step which can be done in an automated manner, once b and c have been carried out and for large programs, it is always preferable to go for automated programming. Because, that will always lead to error free programs, provided your specifications were correct. So, we come back to our old stamping process example, which we have seen in earlier lectures. So, here is a stamping process we know this process, so we have made some addition to it is functionality to be able to explain, you know certain features of a system and make it more complete. So, basic principle is the same, that there is a piston, which and there are two solenoids, hydraulically driven piston, which goes up and down and makes stampings. So, if we know try to write it is list of actions, try to create a linguistic description of the process operation, it looks something like this. So, in step A, it says that, if auto push button is pressed, so that is an operator input it turns powers and lights on. 
So, that is possibly a switch once, one or more switches with we will consider it to be one, which will turn the power and the light on one, I mean the moment the auto button is pressed. When a part is detected, the thing to be stamped, when it is detected, it placed at the proper place and detected, so there has to be a part sensor. The press rams the thing that will be heavy piece, which will move and make a stamping, it advances down and it will stop once, it makes the bottom limit switch. So, that is again a sensor and you must have actuators to make the RAM move down. Then, the press retracts up to the top limit switch and stops, so it makes a stamping and stops. On the other hand, due to some reason, the operator may be like to abort a stamping operation. So, there is a stop push button provided to the operator and a and a stop push buttons, stops the press, only when it is going down, when it is going up, it has no effect, because anyway, that is not going to cause any problem. If the stop push button has been pressed, it means that something abnormal might have happened. So, the reset push button must be pressed, before the auto push button can be pressed for the next cycle of operation. So, once you have pressed up, you have to press reset, you know it is a kind of acknowledgment, that the emergency has gone away and the automated operation can resume. Finally, after retracting the after retracting and then going up the press wait is till the part is removed and the next part is detected. So, till the part will be removed and then after that, when the next part will be detected, again the RAM will start coming down, so this is the English behavior of the system. So, now let us try to convert it to an unambiguous mathematical description, so first step as I said is to get the process inputs and outputs. So, what are the process, so here are the process inputs and outputs, so as inputs, we have part sensors, which gives two kinds of it generates two kinds of events, one is part placed, another is part removed. Then, there are three kinds of push button, the auto push button, the stop push button and the reset push button, these three are operator inputs. Then there are the two limits are switch sensors, bottom limit switch and top limit switch. For outputs, we have four outputs, we have an up solenoid, which moves the RAM up and we have a down solenoid, which moves it down. We have the power light switch and we have a part holder which holds the parts, while it is being stamped. So, these are our outputs. Now, we develop a state machine, so let us try to interpret this diagram. So, what is happening in this diagram is that, let me select my pen, so what is happening is that, you see these squares are the states, we are possibly familiar with state machine. So, a state machine is like a graph, which consist of a set of states, these squares are the states and a set of transitions for example, this is a transition, it is not a good color, go back to white. So, this is a transition and this is a state, so this a transition and this is a state. So, what the system does is that system actually during it is life cycle or during it is activity, the system actually moves from states to states, through transitions. So, it actually spends most of the time in the states and transitions are generally assumed to be momentary, that it is assumed that insignificant amount of time is required to change states. 
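As a side note before going further with the diagram, the process inputs and outputs just listed could be collected into a small illustrative sketch, written here in Python; the field names are informal labels chosen for the sketch, not identifiers used in the lecture or in any particular PLC.

from dataclasses import dataclass

# Illustrative only: informal labels for the I/O points listed above.

@dataclass
class PressInputs:
    part_placed: bool = False    # part sensor event: part detected in position
    part_removed: bool = False   # part sensor event: part taken away
    auto_pb: bool = False        # auto push button (operator input)
    stop_pb: bool = False        # stop push button (operator input)
    reset_pb: bool = False       # reset push button (operator input)
    bottom_ls: bool = False      # bottom limit switch (sensor input)
    top_ls: bool = False         # top limit switch (sensor input)

@dataclass
class PressOutputs:
    up_solenoid: bool = False    # drives the ram up
    down_solenoid: bool = False  # drives the ram down
    power_light: bool = False    # power/light switch (MMI indication)
    part_hold: bool = False      # clamps the part while it is being stamped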
So, you see that it says, that initially, when you have double square, it means that that is a initial state. So, if initially, if the auto push button is pressed, this is a transition A, which gets activated, which will take place and take the system from state 1 to state 2, if the auto push button is pressed, so this is the transition condition. You can have much more complicated conditions; in this case we have very simple conditions. And then if this transition occurs, then the system comes to state 2. In state 2, again if this transition B takes place, whose condition is part, placed it will come to state 3. So, in this way depending on how the sensors are bringing in signals from the field, the various transitions will be enabled and the system will hop from state to state that is the behavior of the system. On the other hand, these green rectangles indicate, that at each state, which are the outputs, which are on for example, you can see that in state 1, nothing is on. None of the outputs are exercised, while in state 2, the power and lights are on. In state 3, the down solenoid is on; actually this is the state, when the solenoid is coming down. So, you see that this is the initial state here the system is switching on the power and the light and possibly waiting for part placed signal to come. So, it might spend some time here and then it is coming down, so takes times there. Then, from here, it could either go this way or go this way and depending on which one of this has come. So, it may so happen, that the bottom limit switch, if the stop push button has not been pressed. Then eventually, it will the bottom limit is switch signal will come and then it will come to step 4, in which it will activate these outputs. On the other hand, if before the bottom limit switch is pressed, if the stop push button is pressed, then it will come to this state, where it will simply stop and put the power and light off. So, you see, so this is the way using a graph of nodes and edges, we can describe the behavior of the system unambiguously. So, this is what we know, what we call the state transition diagram and these are the outputs, which are exercised at different. And that is actually also in captured, in what is known as an output table, so the output table, says that among the four outputs, that we have namely power, light, switch, part hold, up solenoid and down solenoid. What is their status, whether they are on or off, at in the various states, so there are six states and there are four outputs. So, it says that the power light switch stays on, in state 2, 3, 4 and 6, while the part hold stays on only during 3 and 4, up solenoid is 1, during 4 and down solenoid is 1, during 3, having done that. We can start developing our, we have seen that as the system moves on, the various state logics and transition logics are alternately computed So, first there is a state in which some state logic will be satisfied, depending on that outputs will be exercised, after that at some time transition logic will get satisfied. So, now the system will come to a different state. So, the previous state logic is going to be falsified and the new state logic will now become true and then based on that the corresponding outputs will get exercised. So, this we have to now capture in a relay ladder logic program. So, we will organize our program into three different blocks, the first block will contain, so maybe I will chose this pen now. 
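As another aside, the state graph and the output table just described can be written down as plain data, which is exactly the kind of formal description a code generator could start from. This is an illustrative sketch only; the state numbers and transition letters follow the lecture's diagram, the conditions are taken from the linguistic description, and the remaining transitions of the cycle are left out because they are not spelled out in full here.

# Illustrative encoding of the state graph and output table described above.

TRANSITIONS = {
    "A": {"from": 1, "to": 2, "condition": "auto_pb"},
    "B": {"from": 2, "to": 3, "condition": "part_placed"},
    "C": {"from": 3, "to": 4, "condition": "bottom_ls"},
    "D": {"from": 3, "to": 5, "condition": "stop_pb"},
    # remaining transitions (retract complete, reset, part removed) omitted
}

# Output table: which outputs are ON in each state.
OUTPUT_TABLE = {
    1: set(),
    2: {"power_light"},
    3: {"power_light", "part_hold", "down_solenoid"},
    4: {"power_light", "part_hold", "up_solenoid"},
    5: set(),                    # stop state: power and light are put off
    6: {"power_light"},
}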
So, the relay ladder logic will consist of three different blocks: the first block will contain the transitions, as multiple rungs, then the state block and finally, the output block. So, we will now describe these three blocks in the case of this example. So, let us first see the transition logic; for example, what does it say? It says when the transition A logic will be satisfied. The transition A logic, if you recall, brings the system from state 1 to state 2. If you wanted to see that, we could go back just for once, so you see here, transition A takes the system from state 1 to state 2, transition B takes it from state 2 to state 3, and transitions C and D are in parallel, so they could take state 3 to either 4 or 5. So, let us remember this and then go ahead. So, it says that if the system is in state 1, and whether it is in state 1 depends on the state logic. Corresponding to every transition we have an output coil and corresponding to every state we have output coils. So, this is actually an auxiliary contact, corresponding to the output coil called state 1; it is an abstract variable actually. So, it says that if the system is in state 1, then this contact will be made, and at that point of time, if the auto push button signal comes, then transition A will get enabled, so it will be on. If we have modeled our system well, and if we do not consider concurrency, then at a time only one transition will get on. Now what will happen? In the next stage, transition A becomes on and state 1 was already on, so at this point of time, we come to the state logic. So, now let us see what happens in the state logic. In the state logic, state 1 was on, and because state 1 was on and the auto push button was pressed, transition A became on, so the computation came from the transition logic to the state logic. So, what happens is that it finds that transition A is on at this point of time, because the transition logic has already been evaluated and it has been found to be true, so therefore this auxiliary contact will be closed. Therefore, now state 2 will be on. Now, once state 2 is on, two things happen. Firstly, you see that because state 2 is on, this contact will be on and this contact will now be off. So, in the next scan cycle, when these rungs are evaluated, this will go off, because transition A will go off. So, this branch for state 2 can go off; it does not matter, because this one is on, so state 2 will stay on; it says that now the system is in state 2. Now, in this way again, when state 2 is on, at that time in the next cycle some other transition will become enabled, depending on what sensor signals are coming. So, similarly it will turn out, for example, that at some time transition B will take place, and transition B is part placed. So, when the part is placed, if that part placed signal comes, then transition B will be on and the others are not yet enabled, so therefore state 3 will be on. On the other hand, while state 3 is on, either transition C or transition D can occur. Transition D is due to the stop push button being pressed and transition C is due to the bottom limit switch being made.
If any one of these occurs, then it will no longer be in state 3, but it will go to either transit state 4, if transition C occurs state 3 will be falsified and state 4 will become on. On the other hand, if transition D occurs, then this will be falsified and Trans state 5, which is not shown here will become on. So, you see that mechanically, once we have developed the state graph, we can simply mechanically, describe it is behavior. So, corresponding to every transition, we are going to have one rung, corresponding to every state we are going to have one rung and as I have described, we are going to put the enabling logics. So, we are going to say just from the graph that, if when the system is at state 1, if auto push button is pressed, it will go to state 2. Simple this logic, which is given from the graph will take every transition and will write the corresponding logic in the transition logic. Similarly, we will say that, if transition x has been enabled, then it will reach this state, so we can do it from the graph, mechanically just one by one, this writing can be actually written by a program itself. So, one need not really think too much about the logic one, one should think about the logic, while he is drawing the diagram, after that the programming becomes automatic, this is very useful. So, now next we will output logic, output logic is very, very simple, especially in this case. So, the output logic says, that if you are in state 2, then power light switch should be on as we have given in our output table. So, only thing is that look here, we have also added some manual switch, it can be sometimes, we may need to check, we may need to do things manually also. So, the power light switch will be on here, we have put a manual switch. So, if the PLC is running, that if we press the manual switch, then also power light switch may be made on. Similarly, we can have a manual down push button, so we can, this is just to demonstrate that, you can put additional logic to include manual operation of the system. Otherwise, this program, simply says that while you are in state 3 down solenoid will have to be activated very simple. Compare, this with the kind of programs; that we had written earlier, in fact for this process itself, we had written some programs. So, there we did not have any concept of states and transitions, we were directly trying to write outputs in the form of inputs. Now, this kind of problems is that, they are here systems generally have memory, that is why you need the concept state. It is not that, if you get a certain kind of inputs, you will have to produce certain kinds of outputs, it depends on which state the system is in. So, the concept of state is very important and well you can in bring it possibly in certain cases using some temporary variables, but the kind of here if you look at this program. This program says, it is very complicated logic and I am not even 100 percent sure, it is very difficult to be 100 percent sure, whether this logic is pull proof. It says that, if the auto mode by the way, this auto mode is actually you can, it is an auxiliary contact. Corresponding to some logical variable, which you can set by a simple rung, that if it is auto P B and then you have an auto mode coil and then you have this is auto P B and here you can have auto mode. So, you can have a auto mode coil and this will be an auto mode auxiliary switch, so that the PB can be released, so this is a sort of a you know persistent input. 
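Leaving the direct version aside for a moment, the structured organisation described above, a transition block, a state block that latches, and an output block, can be mimicked in a few lines of ordinary code as a rough simulation of how the rungs behave over successive scans. The Boolean dictionaries below stand in for the auxiliary coils, the later transitions of the stamping cycle are omitted, and this is only an illustration of the idea, not an RLL program.

# Rough simulation of the transition / state / output blocks for the first
# few states of the stamping press.

state = {1: True, 2: False, 3: False, 4: False, 5: False}
trans = {"A": False, "B": False, "C": False, "D": False}

def scan(inp):
    # Transition block: a transition coil comes on only when its source
    # state is active and its condition is satisfied.
    trans["A"] = state[1] and inp["auto_pb"]
    trans["B"] = state[2] and inp["part_placed"]
    trans["C"] = state[3] and inp["bottom_ls"]
    trans["D"] = state[3] and inp["stop_pb"]

    # State block: a state latches on when an incoming transition has fired,
    # seals itself in, and drops out when an outgoing transition fires.
    # (Transitions beyond D, back towards states 1 and 2, are omitted.)
    state[2] = (trans["A"] or state[2]) and not trans["B"]
    state[3] = (trans["B"] or state[3]) and not (trans["C"] or trans["D"])
    state[4] = trans["C"] or state[4]
    state[5] = trans["D"] or state[5]
    state[1] = state[1] and not trans["A"]

    # Output block: the outputs simply follow the output table
    # (state 6 is left out of this fragment).
    return {
        "power_light": state[2] or state[3] or state[4],
        "part_hold": state[3] or state[4],
        "down_solenoid": state[3],
        "up_solenoid": state[4],
    }

# Example scans: press the auto push button, then place a part.
print(scan({"auto_pb": True, "part_placed": False, "bottom_ls": False, "stop_pb": False}))
print(scan({"auto_pb": False, "part_placed": True, "bottom_ls": False, "stop_pb": False}))

With that picture of the scan in mind, we come back to the direct, unstructured program and its auto mode contact.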
So, if this auto mode is on, it says that everything will work only if the auto mode is on, and then if the bottom limit switch is made and the down solenoid is not on, then the up solenoid can be energized. Similarly, once the up solenoid is energized, it will remain energized until the top limit switch is made. But one is never sure, and especially when problems have 200 states, it will be impossible to write such direct programs, where the chances will be very high that if somebody wants to write it, he will make mistakes. So, this is a typical example of unstructured programming, the kind of thing that we did before this lesson. Now, having said these things, this is, in very brief, the structured design approach to sequence control programming. Now, since we are coming to the end of this programming part, we must mention that there are certain standards of programming languages which have come, for example IEC 1131; IEC stands for the International Electrotechnical Commission and 1131 is a number. So, this is an international standard for PLC programming and it classifies languages into two types. One type is graphical languages, things like functional block diagrams or ladder diagrams; these are graphical languages. On the other hand there are several text based languages, for example structured text or instruction list; these are some of the kinds of programming paradigms. I already told you that, although we are learning about RLL, there are several other programming paradigms which are also supported by various manufacturers, and these are typical examples of them. So, for now we have understood that this state based design is very useful. Therefore, to be able to capture this state transition logic, a separate language has been proposed; separate methods of advanced programming have been proposed and we are going to take a brief look at that. So, some of the merits of this kind of advanced programming, which takes in this kind of abstract state transition logic: firstly, they are open standards, in the sense that anybody can write a program which can be used by somebody else. Then, structuring of programs in terms of modules helps in program development and program maintenance, and very importantly program upgradation; if you want to add a new functionality, you will find that it is very simple, just change the state diagram. And then find out that, in the new state diagram, if you have three extra states, you simply have to add the corresponding rungs. If you have some extra transitions, you have to add those rungs, and if you have to redirect some transitions to new states, then you have to code that logic, so that those transitions now come into that state logic. So, you know exactly which are the places where you have to make a change; these are the very standard benefits of structured programming. Then, it is not necessary that every rung has to be computed all the time; in fact only a few rungs are actually active and the other rungs are inactive. So, there is no point computing those, and you can save a lot of computational time by skipping them; if this is done, it will improve computational performance. And it supports concurrency, that is very important, because it very often happens that there are several things which will be taking place together, and which are best modeled as concurrent, especially now that we have these.
So, called multitasking operating systems, all are executives running on microprocessors, so there is no reason, why this cannot be done, I mean we have the technology to enable it. So, therefore we must use it, we have formalism, which is the standard and which are called sequential function charts. So, a sequential function is actually a graphical paradigm, for describing modular concurrent control program structure, so remember that, it is basically used to describe structure and not the program itself. So, you have organized your program into a number of well written functions, which are modularly arranged. So, you just using the sequential function chart, you are just describing, that which of this functions will be executed when and under what condition. So, you have this remark which says that the SFC; merely describes the structural organization of the program modules. While, the actual program statements in the modules, still have to be written, probably using existing PLC programming languages something like you know are either RLL or instruction list or whatever. So, if let quick look at the basic SFC constructs, they are basically state machine like constructs and contains some constructs, which are simple programming constructs with which we are familiar. So, you have basically just like states and transitions, here you have steps and transitions. So each step is actually a control program module, which may be programmed in RLL or any other language, so as I said it is a function. Similarly, this steps can be of two types, number 1, initial step I have marked the initial step and the initial step execution can be of two types. One, when the first time, they are executing after power on and second is, if a program reset, actually sequential function charts have some standard instructions, which will reset, which are assumed to be reset the programs. So, after reset also you can execute a particular type of step called the initial step, typically used for initializing variables and steps and intermediate variables. Otherwise, you have regular steps and which one of them are active at any time, actually depends on the transition logic as we have seen. So, when a step becomes inactive, it is state is initialized and only active steps are evaluated during the scan that saves time. Now, each transition is also a control program module, which evaluates based on available signals operator inputs etcetera, which transition conditions are getting enabled. So, once a transition variable evaluates to true, then the steps following it are activated and we say steps, because we in this formalism. Typically, we have seen thaT 1 transition can go to only one step, provided you do not have concurrency, but since SFC’s support concurrency. So, it may happen, that after a one transition, two concurrent you know threads of execution will start. So, that is why we have said steps following it are activated and those preceding it are deactivated. Only transition following active states, so again when you in a state already, which is active, only certain transitions can occur, so therefore, only those transition need to evaluated, so we only those are evaluated. A transition it can either be a program itself, having embodying very complicated logic or it can be a simple variable like in our example which is simple enough. So, for example, in this case, it says that S 1 is actually an initial step and then while S 1 is active, T 1 is active and if the conditions for T 1 are fulfilled. 
Then the system will come to S 2; T 2 will then also become active, and if T 2 occurs, it will go to S 3. So, if you look at the activations: in scan 1, it may so happen that S 1 and T 1 are active, while in scan 2, it may happen that T 1 has fired, that is, it has actually taken place, because T 1 was active. So, the system has come to S 2 and then T 2 is active; since T 2 is active, it is continuously evaluated, and it may happen that in scan 3, T 2 is evaluated to be false. So, these two continue to be active, while at scan 4, it may happen that T 2 has now actually become true, so therefore S 3 has become active, and then some transition outgoing from it will become active. So, in this way certain parts of the program become active at a time, as the system moves through the sequential function chart. You can describe several kinds of programming constructs with this; for example, this is a case where, while the system is at S 1, T 1 and T 2 are both active. So, any one of them can become true; if T 1 is true, the next active state will be S 2, and if T 2 is true, the next active state will be S 3. It could conceivably also happen that T 1 and T 2 both become true in a given scan. So, in that case you have to resolve it, because both transitions cannot take place simultaneously, and there is a convention that the leftmost transition actually takes place. So, if T 1 and T 2 both become true at the same time, then T 1 is assumed to have taken place. So, that was a selective or alternative branch, that is, the system can either flow through this branch or flow through that branch, but not both of them simultaneously. So, as I said, if S 1 is active and T 1 is true, then S 2 becomes active; if T 1 is false and T 2 is true, then S 3 becomes active, and the moment S 3 becomes active, S 1 becomes inactive. Similarly, left to right priority, as I said, and only one branch can be active at a time; if S 4 is active and T 3 becomes true, S 6 becomes active and S 4 becomes inactive, same thing. Similarly, just like selective or alternative branches, you can also have simultaneous or parallel branches. This is a new construct, which is not possible to implement in a normal state machine, so this is the construct that helps concurrent control programs. The actual realization of how you will run concurrent executions can be made using various operating system methodologies such as multitasking. So, here you have a parallel branch, where you see that, at S 1, if T 1 occurs, then both these states become active together, and then at some point of time S 4 becomes active, but T 2 cannot take place unless this branch also comes to S 5. So, only when S 4 and S 5 have both become active will T 2 become active, and it will then be evaluated; if its evaluation is true, then it will come to S 6 and then S 4 and S 5 will become inactive. So, the same thing is written here: if S 1 is active and T 1 becomes true, S 2 and S 3 become active and S 1 becomes inactive; if S 4 and S 5 are active and T 2 becomes true, then S 6 becomes active and S 4 and S 5 become inactive. Similarly, among program control statements, you can have, for example, a jump, where after S 1 and S 2, after T 3, it will go back to S 1. Similarly, you can have a jump inside a loop; in jumping here you can either come to S 5 or you can come to S 1.
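Before going on to jumps and the other program control constructs, the activation rules just described, in particular the parallel branch and its convergence, can be imitated with a toy sketch. The step and transition names below, including TA and TB, are invented for the illustration, and the sketch is not a real SFC execution engine; for instance, it does not implement the left to right priority of selective branches.

# Toy illustration of SFC activation bookkeeping for a parallel branch:
# S1 --T1--> (S2 || S3); S2 --TA--> S4; S3 --TB--> S5; the join transition
# T2 needs BOTH S4 and S5 to be active before S6 can become active.

active = {"S1"}

# (preceding steps, transition name, following steps)
TRANSITIONS = [
    ({"S1"}, "T1", {"S2", "S3"}),    # divergence into a parallel branch
    ({"S2"}, "TA", {"S4"}),
    ({"S3"}, "TB", {"S5"}),
    ({"S4", "S5"}, "T2", {"S6"}),    # convergence: both branches must finish
]

def sfc_scan(conds):
    """One scan: only transitions whose preceding steps are all active are
    evaluated; when one fires, its preceding steps are deactivated and its
    following steps are activated."""
    global active
    snapshot = set(active)
    for pre, name, post in TRANSITIONS:
        if pre <= snapshot and conds.get(name, False):
            active = (active - pre) | post

sfc_scan({"T1": True});             print(sorted(active))  # ['S2', 'S3']
sfc_scan({"TA": True, "T2": True}); print(sorted(active))  # ['S3', 'S4'] join cannot fire yet
sfc_scan({"TB": True});             print(sorted(active))  # ['S4', 'S5']
sfc_scan({"T2": True});             print(sorted(active))  # ['S6']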
So, basically you can execute these kinds of program control statements using SFCs. So, a jump cycle jump occurs after transitions and jump into or out of a parallel sequence is illegal, because you cannot jump out of one sequence, when you are in parallel, you must enter them together in any parallel sequence. While, you are executing the sequence, you may be in different states at different arms, but you must enter them together and you must leave them together. Similarly, last you may have termination of a program, so step without a following transition, if you have a step out of which there is no transition; that is a dead state. And it once, it is activated; it can only be deactivated by a special instruction called SFC reset. You can nest programming, you can nest within a concurrent loop, so here if T 1 occurs, S 2 and S 3, then from S 3, if T 2 occurs, then simultaneously S 2, S 4 and S 5 can become active and if the all of them are active, then T 3 will take place and then S 6. Similarly, you see here what is happening, exactly the same thing is happening; here you have a selective branch, so you can either go into S 2 to T 1 or go into S 3 through T 2, that is a selective branch. While you are in a selective branch, you can execute a parallel branch and then if when S 4 and S 5 are both active at that time. If T 2 evaluates to true, then you will come to a jump, which will again take you to S 1. So, such control flows, you can specify, so we have to come to the end of the lesson, in this lesson, what have we learnt, we have learnt the broad steps in the sequence control design and these are very important that first of all identify the inputs and outputs. Then very critically study the system behavior, look at the requirements for manual control operator safety, faults etcetera and then try to formalize this description. So, first may be write the operation in crisps steps in English and then try to convert this description of steps into some short of formalism. And we have in this lecture, we have seen the formalism of sequential, I mean state machines which can be programmed using a sequential function chart. So, this modeling we have seen that how to model a particular application, we took a very simple industrial application, like a stamping process and built it is state machine. And then we have shown that, how given the state machine, how very mechanically, you can arrive at you can actually structure, your program and then write the transition logic and the state logic and the output logic. And we have seen that, this is a very much expected to lead to you know error free programs. And finally, we actually found syntax, I mean some language graphical language, which can actually capture this flow of logic in terms of states and transitions. So, this program control or the program flow description strategy, using sequential function charts, we have seen, now before ending, I think it is nice to look at some problems. For your examples, you can try to create, let us looks at exercise problem number 1, so the problem number one says create an RLL program, for traffic light control. So, what happens, that the lights should have cross walk buttons, for both directions of traffic light. A normal light sequence for both directions will be green, 16 seconds and yellow, 4 seconds. 
If the cross walk button has been pushed, then a walk light will be on for 10 seconds and the green light will be extended to 24 seconds. So, only this much of a description is given; now, what is the task? The first task would be to develop a state machine description and probably write it in terms of an SFC, and while going to do this SFC we will actually encounter several problems; for example, we may find that things are not settled. So, let us look at the SFC of this problem. We will for the time being skip problem number 2, or rather, let us look at problem number 2 next and then we will go to the SFCs. So, the next problem says design a garage door controller using an SFC, and the behavior of the garage door controller is like this: there is a single button in the garage and a single button remote control. When the button is pushed once, the door will move up or down. If the button is pushed while moving, then the door will stop and a second push will start motion again in the opposite direction. So, you have a single button which you are going to push: you press it once, it might go up, press it again, it will stop, press it again, it will reverse. So, you have only one single button, and you see that this is purely sequential behavior. So, unless you have a concept of state, you can never model this; you are actually pressing the same button, but sometimes it is stopping, sometimes it is moving up, sometimes it is moving down, and this kind of logic is not possible to model using only inputs and outputs. Then, you have to bring in the concept of memory or state, as you might have noticed when you have designed digital logic circuits; whenever you have memory, you have sequential logic. Then there are top and bottom limit switches to stop the motion of the door; obviously, you have pressed a button and it is going up, so it has to stop somewhere, so you have to have limit switches. There is a light beam across the bottom of the door; if the beam is cut while the door is closing, then the door will stop and reverse. So, while the door is closing, suppose you put something below it; the beam is trying to ensure that the door can actually close, because otherwise what will happen is that the door will get stuck if you have kept anything there. Suppose the back of your car is actually sticking out, then the door will go and hit your car. So, the door will close fully only if the light beam is not interrupted, which means that there is clearance. Otherwise, if the light beam is interrupted at any time, immediately the door will stop and reverse; it will think that there is something there and it cannot close. There is a garage light that will be on for 5 minutes after the door opens or closes, so you have to use a timer here. So, any time you do an operation, the light is going to go on and stay for 5 minutes. So, let us first look at this one, which is simple. So, here you have the garage door, so let us see how they modeled it: in this case, if the button is pressed, you have step 1 going to step 2 going to step 3, and this is the close door step. So, let us go back; there is a single button in the garage and a single button remote control. So, you can have either a button pressed in the garage or you can have a button pressed from the remote.
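Before continuing with the SFC walkthrough, one way to see why memory is unavoidable here is a tiny illustrative state machine for the single button behaviour. The state names are invented for this sketch, either button, local or remote, is treated as the same logical event, and the limit switches, the light beam reversal and the 5 minute light timer are left out.

# Illustrative only: the same "button" event means different things in
# different states, which is exactly the memory the lecture is pointing at.

def on_button(state):
    """Next door state when the garage or remote button is pressed."""
    return {
        "CLOSED": "OPENING",
        "OPENING": "STOPPED_WHILE_OPENING",
        "STOPPED_WHILE_OPENING": "CLOSING",   # second push reverses direction
        "OPEN": "CLOSING",
        "CLOSING": "STOPPED_WHILE_CLOSING",
        "STOPPED_WHILE_CLOSING": "OPENING",   # second push reverses direction
    }[state]

s = "CLOSED"
for _ in range(3):        # push, push, push
    s = on_button(s)
    print(s)              # OPENING, STOPPED_WHILE_OPENING, CLOSING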
So, there are two kinds of buttons; you have either a button pressed from the garage or from the remote, then we will go to step three and it starts closing the door, this is the output. On the other hand, from there, if a button has been pressed again, either locally or from the remote, then the door will stop. Likewise, if the limit switch is made, the door will stop; so if either a button has been pressed or the bottom limit switch is reached, then immediately the door will stop. Or, if the light beam is interrupted, then it will not only stop, it will actually reverse, so immediately it is going to reverse. Here what happens is that, if you have pressed a button to stop it, then if you press it again, it will go to reverse. So, we have captured the behavior of the garage door in this form using an SFC; you can do a similar thing also for the traffic light, let that be an exercise. That brings us to the end of this lesson, thank you very much and see you again for the next lesson. Welcome to lesson 22. So far we have learnt about the basic functioning of a PLC, we have learnt how to write a program for it. But all this time, we have seen the PLC as an abstract device; we have not seen what is inside a PLC system, what actually physically makes it. So, in this lesson, we are going to look at the hardware environment of the PLCs, basically, what PLC systems are made of. So, we are going to look at the components of a PLC, how they are connected, and their functionality, that is, describing what they do. Before we begin, it is customary to see the instructional objectives. So, the instructional objectives are the following: after going through this lesson, one should be able to mention the distinguishing features of industrial automation computational tasks. A PLC basically is a computer, so it performs computational tasks related to industrial automation. But this task has certain very distinguishing features, compared to other tasks which are, let us say, done in an office environment. So, you should be able to mention some of these features, and mention the major hardware components of a PLC; describe major typical functional features and performance specifications for the CPU or Central Processing Unit, the I O or Input Output, the MMI or Man Machine Interface and the communication modules. Then, explain the advantages of a function module, so what are the advantages of a function module you will be able to explain, and describe some major typical function module types; so these are the instructional objectives. The program execution in these systems is very interesting; there are generally three or four different types of program execution modes specified. For example, a mode could be cyclic; by cyclic we mean that you have a number of computational tasks which begin, just like RLL program execution, run through to the end and start all over again, and this just goes on. Typically the cycle time remains more or less constant, but it could vary a little bit depending on the program logic. For example, if you have some program control statements, if-then-else kinds of statements, then whether some blocks will be executed or not will actually depend on the data. So, program execution time is not always constant, it actually depends on the data, but roughly it will be constant and in fact it is preferable that it is constant.
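As a minimal illustration of the cyclic execution mode just mentioned, a scan loop reads the inputs, solves the logic, writes the outputs and repeats, trying to keep the cycle time roughly constant. This is only a sketch; read_inputs, solve_logic and write_outputs are placeholders standing in for the real I O drivers and the user program, not any vendor's firmware.

import time

def read_inputs():
    return {"auto_pb": False}          # placeholder image of the input terminals

def solve_logic(inputs, memory):
    # trivial stand-in for the user program: latch a light on the auto push button
    memory["power_light"] = memory.get("power_light", False) or inputs["auto_pb"]
    return {"power_light": memory["power_light"]}

def write_outputs(outputs):
    pass                               # placeholder: would drive the output modules

def run(cycles=5, scan_ms=10):
    memory = {}
    for _ in range(cycles):
        start = time.monotonic()
        outputs = solve_logic(read_inputs(), memory)
        write_outputs(outputs)
        # the scan time is roughly constant, but can vary a little with the data,
        # for example when conditional branches skip part of the logic
        elapsed = time.monotonic() - start
        time.sleep(max(0.0, scan_ms / 1000 - elapsed))

run()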
So, that you are not surprised that, for some value of data, suddenly your programming execution takes a very long time and your deadlines are missed. So, it is preferred that real time programs are predictable and not too much varying time requirements. The advantages of distributed network IO are well understood, cost saving on maintaining integrity of high speed signals, because digital comes. Basically, the advantages of digital communication and the advantages of having an intelligent module near the machine. So, you can have good sensor diagnostics, fault can be much more, you know monitoring functions can be realized without overloading the CPU. You can do special functions likes like start up, so in a sense, in such cases the PLC, CPU, really works like a supervisory system and the actual controls is done on the spot, so you have better centralized coordination monitoring. So, we have come to the end of the lecture and I hope you have got a fair idea, about what makes a PLC system and as is customary again, you have some points to ponder. So, think whether you can mention two distinguishing features of industrial automation task, compare to let us say a task in a bank, which are also computational tasks, which also communicate. Mention five major components of a PLC system, we have mentioned more than five, so you should be able to mention five and distinguish between normal distributed and network IO, so here we end today. Thank you very much, we will meet again. ", "transcription_base": " music music music music So, welcome to lesson 21 of the course industrial automation and control. In this lesson, we are going to learn a structure design approach to sequence control. So far we have mainly seen the programming constructs have seen small small program segments, timers, counters. In this lesson for the first time we will see that given a practical problem how to how to study the problem, how to what are the steps that you go through to finally arrive at an RLL program. So, and this will be followed using a very systematic approach because as I have already told you that industrial control applications are very critical in the sense that they if you have programming errors in them they can be very expensive in terms of money or in terms of even can cost human lives etcetera. So, it is always good to have a very systematic design process by which you can decompose the problem and then finally, arrive at a solution. So, we will look at the instruction objectives. The instruction or objectives of this lesson is are firstly, to be able to model simple sequence control applications using state machines. State machine is a is actually a formal method and we advocate to use the formal methods because English can be very ambiguous sometimes contradictory also. So, we have to model it using methods which have which are unambiguous, consistent do not contain contradictions and are also easy to understand and develop. Then from these formal models we have to develop RLL programs for such applications and for doing this there are certain apart from the RLL programs there are some modern programming constructs which are being made available. One of them is the SFC or the sequential function chart. So, we will take a look at that and also understand some of its advantages. So, that is the these are the instructional objectives of this lesson. So, now let us go through the steps in basic broad steps in sequence control design. So, first step is to study the system behavior. 
This is a very critical step and most of the errors that happen in any programming exercise not only this kind of industrial automation programming any programming mainly arises from the fact that the programmer or the developer did not understand the system well. So this is a very important step and one must first of all identify inputs to the system that is the programmable controller program, what inputs it will take inputs can come from either from sensors in the field or it come from operator interface which I call the MMI or the man machine interface. So somebody presses a push button that is an operator input right. On the other hand some limit switch is made that is a sensor input. Similarly, I identify the outputs. So switch on motor. So motor is an actuator that is a kind of output. There is another kind of output for example switch on some indicator or some lamp that would be a that would be an output which again goes to the MMI or the man machine interface. So, we have to first identify these. Then study the sequence of actions and events under the various operational modes. This is the main task. You have to very carefully understand what is going to happen and what will happen after what at what time intervals etcetera. And one thing that must be very clearly remembered is that when you are developing and industry automation program, one not only has to remember, not only has to design for normal behavior, but one must to some extent at least take into account the possible failures that can occur. Otherwise a system that behaves well under normal behavior can behave in a very nasty manner if some simple element of the system like a sensor fails. Then even apart from the automated behavior one has to examine the requirements that exist for number 1 manual control, manual control is very important because for finally, if the automation equipment fails, it should be able to operate the system using manual control, right maybe, maybe, maybe right on the field, while the automated control may be actually working quite, quite a distance away from the actual equipment. It may be housed in some control room. On the other hand, the manual controls may be near the equipment at the field. So, the possibility of including such manual controls must be examined. Whether some additional sensors are required, some sensors may be there, but to achieve a kind of functionality, some other sensors may be needed, indicators, alarms and as well as operational efficiency or safety. These are the factors which must be considered to finally arrive at the functionality. One must always remember that the customer may not always be able to express his or her needs and at a good automation engineer should be able to supplement it with his own experience in such cases. So, having done that the next step is to convert generally these things are captured manually and using a linguistic description something like you know something like English statement. So, you talk to customers, talk to engineers on the field and get their requirements. But this is very dangerous to use for program development. So, we have to convert this language description into formal process models. And in fact, a lot of you know inconsistencies which are there, ambiguity which are there in the linguistic description actually surface at this time. Even for the during the process of transforming it into a formal process model, one may initially use intermediate forms like you know like flow charts for example, ok. 
Then finally, but finally, it is prescribed that one should be able to convert it to a formal mathematical framework something like let us say a finite state machine which we will be using here. After up to this the operations are manual having done that then one has to go for design design of the sequence control logic based on the formal model and then finally, one has to implement the control logic in the form of an RLL program and it is preferable that these steps especially the step B is made as much as possible automatic because this is the step which can be done in an automated manner once B and C have been carried out and for large programs it is always preferable to go for automated programming because that will always lead to error free programs provided your specifications were correct. So, we come back to our old stamping process example which we have seen in earlier lectures. So, here is a stamping process we know this process. So, we will we have made some addition to its functionality to be able to explain you know certain features of a system and make it more complete. So, basic principle is the same that there is a piston which and there are two solenoids hydraulically driven piston which goes up and down and makes stampings. So, if we now try to write its list of actions try to create a linguistic description of the process operation it will look something like this. So, instead of A it says that if the auto push button is pressed so that is an operator input, it turns powers and lights on. So, there is possibly a switch once one or more switches that we can we will consider it to be 1 which will turn the power and the light on 1. I mean the moment the auto button is pressed. When a power is detected the thing to be stamped when it is detected, which when placed at the proper place and detected. So, there has to be a part sensor. The press ram, the thing that will be heavy piece which will move and make a stamping, it advances down and it will stop once it makes the bottom limit switch. So, that is again a sensor and you must have actuators to make the ram move down. Then the press, the press then retracts up to up to the top limit switch and stops. So, it makes a stamping and stops all right. On the other hand, due to some reason the operator may be may may like to abort a stamping operation. So, there is a stop push button provided to the operator and a stop push button stops the press only when it is going down, when it is going up it has no effect because any way that is not going to cause any problem. If the stop push button has been pressed, it means that something abnormal might have happened. So, the reset push button must be pressed before the auto push button can be pressed for the next cycle of operation. So, once you have pressed stop, you have to press reset, you know it is a kind of acknowledgement that the emergency has gone away and the automated operation can resume. after retracting the after retracting and then going up the press weights till the part is removed and the next part is detected. So, till the part will be removed and then after that when the next part will be detected again the ram will start coming down. So, this is the English behavior of the system ok. So, now let us try to convert it to an on ambiguous mathematical description. So, first step as I say is to get the process inputs and outputs. So, what are the process? So, here are the process inputs and outputs. 
So, as inputs we have part sensors which gives two kinds of we generate two kinds of events. One is part placed another is part removed. Then there are three kinds of push button the the auto push button, the stop push button and the reset push button. These three are operator inputs. Then there are the two limits switch sensors, bottom limit switch and top limit switch. For outputs we have four outputs we have an up solenoid, which moves the ram up, we have a down solenoid, which which moves it down, we have the power light switch and we have a part hold, which holds the parts while it is being stamped. So, these are our outputs. Now, we develop a state machine. So, let us try to interpret this diagram. So, what is happening in this diagram is that, let me select my pen. So, what is happening is that you see these squares are the states. We are we are possibly familiar with state machine. So, a state machine is like a graph which consists of a set of states these squares are the states and a set of transitions for example, this is a transition, this is not a good color go back to white. So, this is a transition and this is a state. So, this is a transition and this is a state. So, what the system does is that system the system actually during its light cycle or during its activity the system actually moves from states to states through transitions. So, it actually spends time most of the time in the states and transitions are generally assumed to be momentary that is it is assumed that insignificant amount of time is required to change states. So, you see that it says that initially the when you have double square it means that that the initial state. So, if initially if the auto push button is pressed this is a transition A which gets activated which will take place and take the system from state 1 to state 2 if the auto push button is pressed. So, this is the transition condition you can have much more complicated conditions in this case we have very simple conditions. And then if this transition occurs then the system comes to state 2. In state 2 again if this transition beta explains whose condition is part placed it will come to state 3. So, in this way depending on how the sensors are bringing in signals from the field the various transitions will be enabled and the system will hop from state to state that is the behavior of the system. On the other hand, these green rectangles indicate that at each state which are the outputs which are on. For example, you can see that in state 1 nothing is on, none of the outputs are exercised while in state 2 the power and lights are on. In state 3 the down solenoid is on actually this is the state when the solenoid is coming down. So, you see that this is the initial state here the here the system is switching on the power and the light and possibly waiting for part place signal to come. So, it might spend some time here then it is coming down. So, text time there then from here it could either go this way or go this way and depending on which which one of this have come. So, it may so happen that the bottom limit switch if the stop push button has not been pressed then eventually it will the bottom limit switch signal will come and then it will come to step 4 in which it will activate these outputs. On the other hand if before the bottom limit switch is pressed if the stop push button is pressed then it will come to this state where it will simply stop and and and put the power and light off. 
So, you see so this is the way using a graph of nodes and edges we can describe the behavior of the system unambiguously. So, now, so this is what we know what we call the state transition diagram and these are the outputs which are exercised at different and that is actually also in capture in what is known as an output table. So, the output table says that among the four outputs that we have namely power light, power light switch, power hold, up solid noise and down solid noise. Which are what are the status whether they are on or off in the various states. So, there are 6 states and there are 4 outputs. So, it says that the power light switch stays on in state 2, 3, 4 and 6 right. While the power hold stays on only during 3 and 4, up sorry noise is 1 during 4 and down sorry noise is 1 during 3. Having done that we can start developing our, so you see we have seen that the as the system moves on the various state logics and transition logics are alternately computed. So, first there is a state in which some some some state logic will be satisfied depending on that outputs will be exercised. After that at some time some transition logic will get satisfied. So, now the system will come to a different state. So, the previous state logic is going to be falsified and the new state logic will now become true and then based on that the corresponding outputs will get exercised. So, this we have to now capture in a relay ladder logic program. So, we will organize our program into three different blocks. The first block, the first block will contain. So, maybe I will choose this pen now. So, the relay ladder logic will consist of three different blocks ok. The first block will contain the transitions. is multiple ranks, then the state block and finally the output block. So, we will now describe these three blocks in the case of this example. So, let us first see the transition logic for example, for example, what does it say? It says that if when will transition A logic will be satisfied, the transition A logic if you recall brings the system from state 1 to state 2. So, if you wanted to see that we could go back just for once. So, you see here transition A takes the system from state 1 to state 2 transition B takes it from state 2 to state 3 transition C and D are in parallel it could take state 3 to either 4 or 5. So, let remember this and then go ahead. So, it says that if the system is in state 1, if it is in state 1 that depends on the state logic, then the corresponding to every transition we have an output coil and corresponding to every state we have output coils. So, this is actually an auxiliary contact corresponding to the output coil called state 1. It is an abstract variable actually. So, it says that if the system is in state 1, then this contact will be made and at that point of time if the auto push button signal comes, then transition A will get enabled. So, it will be on. So, if you have modeled, if you have So if you have model, if you have model our system well then at a time only one transition will get on right. If you have if you do not consider concurrency then at a time only one transition will get on. Now what will happen? Now in the next stage so now transition A becomes on and state 1 was already on. So at this point of time we come to the state logic. So, now let us see what happens in the state logic. In the state logic see state 1 was on right. Now, because state 1 was on and because auto push button was pressed transition A became on. 
So, the computation came from the transition logic to the state logic. What happens is that the state logic finds that transition A is on; this contact is found closed, because the transition logic has already been evaluated and found to be true. Therefore this auxiliary contact closes, and state 2 turns on. Now, once state 2 is on, two things happen. Because state 2 is on, this contact turns on and this other contact turns off. So, in the next scan cycle, when these rungs are evaluated, transition A goes off; that does not matter, because the seal-in contact of state 2 is on, so state 2 stays on. Therefore the program now says that the system is in state 2.

In this way, while state 2 is on, in some later cycle some other transition will become enabled, depending on what sensor signals are coming. For example, at some time transition B will take place; transition B, let us see, is part placed. So, when the part is placed and that part placed signal comes, transition B turns on, and since the other transitions are not yet enabled, state 3 turns on. On the other hand, while state 3 is on, either transition C or transition D can occur: transition D is due to the stop push button being pressed, and transition C is due to the bottom limit switch being made. If any one of these occurs, the system is no longer in state 3: if transition C occurs, state 3 is falsified and state 4 becomes on; if transition D occurs, state 3 is falsified and state 5, which is not shown here, becomes on.

So, you see that once we have developed the state graph, we can simply and mechanically describe its behavior. Corresponding to every transition we are going to have one rung, corresponding to every state we are going to have one rung, and, as I have described, we are going to put in the enabling logics. We say, just from the graph, that when the system is in state 1, if the auto push button is pressed it goes to state 2; we take every transition and write the corresponding logic in the transition block. Similarly, we say that if transition X has been enabled, then the system reaches this state. We can do it from the graph mechanically, one by one; in fact this writing can be done by a program itself. So, one need not really think too much about the logic here; one should think about the logic while drawing the diagram, and after that the programming becomes automated. This is very useful.

Next we have the output logic, which is very simple, especially in this case. The output logic says that if you are in state 2, then the power light switch should be on, as we have given in our output table. The only thing is that, look here, we have also added a manual switch, because sometimes we may need to do things manually as well. So, the power light switch will be on in state 2, and here we have put a manual switch.
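Again as an illustrative sketch rather than the lecture's RLL, the state block's seal-in pattern and the output block with a manual override might look like the following; all names are assumed, 't' is the dictionary returned by the transition-block sketch above, and state 6 is left out because its incoming transitions were not detailed here.

```python
# A sketch of the state block (seal-in/latch pattern) and the output block
# with a manual override. All names are illustrative.

def state_block(state, t):
    return {
        "S1": state["S1"] and not t["A"],
        "S2": (t["A"] or state["S2"]) and not t["B"],            # sealed in until B fires
        "S3": (t["B"] or state["S3"]) and not (t["C"] or t["D"]),
        "S4": t["C"] or state["S4"],
        "S5": t["D"] or state["S5"],
    }

def output_block(state, inputs):
    # From the output table: power light in 2, 3, 4 (state 6 omitted here),
    # part hold in 3 and 4, down solenoid in 3, up solenoid in 4,
    # plus the manual overrides mentioned above.
    return {
        "power_light":   state["S2"] or state["S3"] or state["S4"]
                         or inputs.get("manual_light", False),
        "part_hold":     state["S3"] or state["S4"],
        "down_solenoid": state["S3"] or inputs.get("manual_down", False),
        "up_solenoid":   state["S4"],
    }
```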
So, if the PLC is running, then if you press the manual switch, the power light switch can also be made on. Similarly, we can have a manual down push button. This is just to demonstrate that you can put in additional logic to include manual operation of the system. Otherwise, this program simply says that while you are in state 3, the down solenoid has to be activated; very simple.

Compare this with the kind of programs we had written earlier; in fact, for this very process we had written some programs. There we did not have any concept of states and transitions; we were directly trying to write outputs as functions of inputs. The problem with that is that such systems generally have memory, and that is why you need the concept of states. It is not the case that a certain kind of input always produces a certain kind of output; it depends on which state the system is in. So, the concept of state is very important. You can sometimes get around it in certain cases using temporary variables, but look at this program: it is very complicated logic, and I am not even 100 percent sure, it is very difficult to be 100 percent sure, whether this logic is foolproof. It says that if the auto mode is on... by the way, this auto mode is an auxiliary contact corresponding to a logical variable which you can set by a simple rung: you have the auto push button driving an auto mode coil, and the auto mode auxiliary contact seals it in so that the push button can be released. It is a sort of persistent input. So, everything here works only if the auto mode is on, and then, if the bottom limit switch is made and the down solenoid is not on, the up solenoid will be energized. Similarly, once the up solenoid is energized, it remains energized until the top limit switch is made. It looks okay, but one is never sure, and especially when problems have 200 states, it will be impossible to write such direct programs, and the chance will be very high that whoever tries to write them will make mistakes. So, this is a typical example of unstructured programming, the kind of thing that we did before this lesson.

Now, having said these things, this, in very brief, is a structured design approach to RLL programming. Since we are coming to the end of this programming part, we must mention that certain standards for programming languages have come up, for example IEC 1131; IEC stands for the International Electrotechnical Commission and 1131 is a number. This is an international standard for PLC programming, and it classifies languages into two types. One group is the graphical languages, things like function block diagrams or ladder diagrams. On the other hand, there are several text-based languages, for example structured text or instruction list. These are some of the programming paradigms: as I have already told you, although we are learning RLL, there are several other programming paradigms which are also supported by various manufacturers, and these are typical examples of them.
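The "auto mode" persistent input mentioned above can be sketched as a simple seal-in; this is an editor's illustration, and clearing the mode with a stop input is an assumption added only to make the example complete.

```python
# A sketch of the auto-mode seal-in: the auto push button energizes a coil
# that holds itself on, so the button can be released. How the mode is
# cleared (here a stop input) is an assumption for illustration.

def auto_mode_rung(auto_mode, auto_pb, stop_pb=False):
    return (auto_pb or auto_mode) and not stop_pb

mode = False
mode = auto_mode_rung(mode, auto_pb=True)    # button pressed: mode latches on
mode = auto_mode_rung(mode, auto_pb=False)   # button released: mode stays on
print(mode)                                  # True
```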
So, by now we have understood that this state-based design is very useful. Therefore, to be able to capture this state transition logic, a separate language has been proposed; separate methods of advanced programming have been proposed, and we are going to take a brief look at them. Some of the merits of this kind of advanced programming, which works with this kind of abstract state transition logic, are the following. Firstly, they are open standards, in the sense that anybody can write a program which can be used by somebody else. Then, structuring programs in terms of modules helps in program development and program maintenance and, very importantly, in program upgradation: if you want to add a new functionality, you will find it is very simple. Just change the state diagram; if the new state diagram has three extra states, you simply add the corresponding rungs, if you have some extra transitions, you add those rungs, and if you have to redirect some transitions to new states, you code that change in the corresponding state logic. So, you know exactly which are the places where you have to make changes; these are the standard benefits of structured programming. Then, it is not necessary that every rung be computed all the time; in fact, only a few rungs are actually active and the others are inactive, so you can save a lot of computational time by skipping those, and that improves computational performance. It also supports concurrency, which is very important, because it very often happens that several things take place together and are best modeled as concurrent, and especially now that we have multitasking operating systems, or executives running on microprocessors, there is no reason why this cannot be done; we have the technology to enable it, so we must use it.

Therefore, we have a standard formalism called sequential function charts. A sequential function chart is actually a graphical paradigm for describing modular, concurrent control program structure. Remember that it is basically used to describe structure, and not the program itself. It is as if you have organized your program into a number of well-written functions which are modularly arranged; using the sequential function chart, you are just describing which of these functions will be executed when, and under what condition. So, you have this remark which says that the SFC merely describes the structural organization of the program modules, while the actual program statements in the modules still have to be written, probably using existing PLC programming languages, something like RLL or instruction list or whatever. Let us take a quick look at the basic SFC constructs. They are basically state machine like constructs, and they also contain some simple programming constructs with which we are familiar. Just as we had states and transitions, here you have steps and transitions. Each step is actually a control program module, which may be programmed in RLL or any other language; as I said, it is a function.
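To make the "structure only" idea concrete, one can think of an SFC as data that binds each step to a program module and each transition to a condition; the sketch below is an editor's analogy in Python, where the callables stand in for modules that would really be written in RLL or another PLC language, and all names are invented.

```python
# An analogy for what an SFC describes: which module runs in which step, and
# under which condition the flow moves on. The module bodies themselves would
# be written in an ordinary PLC language; all names here are illustrative.

def power_up(io):            # step action: a "control program module"
    io["power_light"] = True

def close_ram(io):
    io["down_solenoid"] = True

SFC = {
    "steps":       {"S1": power_up, "S2": close_ram},
    "transitions": [("S1", lambda io: io.get("part_placed", False), "S2")],
    "initial":     {"S1"},
}
```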
These steps can be of two types. Number one is the initial step, which I had remarked on; execution of the initial step happens in two situations: the first time you execute after power on, and after a program reset (sequential function charts have some standard instructions which are assumed to reset the program). So, after a reset also, this particular type of step called the initial step is executed; it is typically used for initializing variables, states and intermediate variables. Otherwise you have regular steps, and which of them are active at any time actually depends on the transition logic, as we have seen. When a step becomes inactive, its state is initialized, and only active steps are evaluated during the scan; that saves time.

Next we have transitions. Each transition is also a control program module, which evaluates, based on available signals, operator inputs, etcetera, whether its transition condition is enabled. Once a transition variable evaluates to true, the steps following it are activated. We say steps, plural, because in this formalism we have seen that one transition can go to only one step provided you do not have concurrency, but since SFCs support concurrency, it may happen that after one transition two concurrent threads of execution start. That is why we say that the steps following it are activated, and those preceding it are deactivated. Only the transitions following active steps matter: when you are already in an active step, only certain transitions can occur, so only those need to be evaluated, and only those are evaluated. A transition can either be a program itself, embodying very complicated logic, or a simple variable, as in our example, which is simple enough.

For example, in this case S1 is actually an initial step. While S1 is active, T1 is active, and if the conditions for T1 are fulfilled the system comes to S2; while at S2, T2 also becomes active, and if T2 occurs the system goes to S3. If you look at the activations, in scan 1 it may happen that S1 and T1 are active, while in scan 2 T1 has fired, that is, it has actually taken place because T1 was active; so the system has come to S2 and T2 is active. While T2 is active it is continuously evaluated: it may happen that in scan 3 T2 evaluates to false, so these two continue to be active, while at scan 4 T2 may actually have become true, so S3 becomes active and any transition outgoing from it becomes active. In this way, certain parts of the program become active at a time, as the system moves through the sequential function chart.

Using this, you can describe several kinds of programming constructs. For example, there is the case where, while the system is at S1, both T1 and T2 are active. Any one of them can become true: if T1 is true, the next active state will be S2, and if T2 is true, the next active state will be S3. But it could conceivably also happen that, in a given scan, T1 and T2 both become true.
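Before resolving that conflict, here is a small sketch of the scan-by-scan activation just described for the S1, T1, S2, T2, S3 chain; the structure and condition names are assumptions made for illustration.

```python
# A sketch of the scan behaviour: only transitions following active steps are
# evaluated, and a firing transition deactivates its preceding steps and
# activates the following ones. Names follow the S1/T1 example above.

CHART = [
    ("T1", {"S1"}, {"S2"}),   # (transition, preceding steps, following steps)
    ("T2", {"S2"}, {"S3"}),
]

def scan(active, conditions):
    for name, pre, post in CHART:
        if pre <= active and conditions.get(name, False):  # evaluated only if pre is active
            active = (active - pre) | post
    return active

active = {"S1"}
active = scan(active, {"T1": True})    # scan 2: T1 fires -> {'S2'}
active = scan(active, {"T2": False})   # scan 3: T2 false -> still {'S2'}
active = scan(active, {"T2": True})    # scan 4: T2 fires -> {'S3'}
print(active)
```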
Coming back to the case where T1 and T2 both become true in the same scan: you have to resolve it, because both transitions cannot take place simultaneously. There is a convention that the leftmost transition actually takes place: if T1 and T2 both become true at the same time, T1 is assumed to have taken place. That was a selective or alternative branch, that is, the system can either flow through this branch or flow through that branch, but not both of them simultaneously. So, as I said, if S1 is active and T1 is true, then S2 becomes active; if T1 is false and T2 is true, then S3 becomes active; and the moment S3 becomes active, S1 becomes inactive. Similarly, there is left-to-right priority, as I said, and only one branch can be activated at a time. If S4 is active and T3 becomes true, S6 becomes active and S4 becomes inactive; same thing.

Just like selective or alternative branches, you can also have simultaneous or parallel branches. This is a new construct, which cannot be implemented in a normal state machine, and it is the construct that supports concurrent control programs. The actual realization, that is, how you run concurrent executions, can be made using various operating system methodologies such as multitasking. So, here you have a parallel branch. At S1, if T1 occurs, then both of these states become active together. At some point of time S4 becomes active, but T2 cannot take place unless the other branch also comes to S5. Only when S4 and S5 are both active will T2 become active and be evaluated; if it evaluates to true, the flow comes to S6, and S4 and S5 become inactive. The same thing is written here: if S1 is active and T1 becomes true, S2 and S3 become active and S1 becomes inactive; if S4 and S5 are active and T2 becomes true, then S6 becomes active and S4 and S5 become inactive.

Similarly, among program control statements, you can have jumps. For example, you have a jump where, after S1 and S2, after T3 the flow goes back to S1. You can also have a jump inside a loop: jumping here, you can either come to S5 or come to S1. So, basically, you can realize this kind of program control statement using SFCs. A jump occurs after a transition, and a jump into or out of a parallel sequence is illegal, because you cannot jump out of one sequence while you are in parallel. You must enter the parallel branches together; while executing them you may be in different states on different arms, but you must enter them together and you must leave them together.

Lastly, you may have termination of a program: a step without a following transition. If you have a step out of which there is no transition, it is a dead state; once it is activated, it can only be deactivated by a special instruction called SFC reset. You can also nest a concurrent loop within a concurrent loop. Here, if T1 occurs, S2 and S3 become active; then from S3, if T2 occurs, S4 and S5 become active simultaneously along with S2, and if all of them are active then T3 takes place and we reach S6. Similarly, you see here exactly the same kind of thing: here you have a selective branch.
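The two branch types can be contrasted in a short sketch; again this is an editor's illustration with invented step and transition names, not the chart shown in the lecture.

```python
# A sketch contrasting the two divergence types: selective (left-to-right
# priority, one branch only) and parallel (both arms start together, and the
# join waits for both). Step/transition names are illustrative.

def selective_branch(active, cond):
    """Divergence with left-to-right priority: only one branch is taken."""
    if "S1" in active:
        if cond.get("T1"):                              # leftmost transition wins
            return (active - {"S1"}) | {"S2"}
        if cond.get("T2"):
            return (active - {"S1"}) | {"S3"}
    return active

def parallel_branch(active, cond):
    """Simultaneous divergence and AND-convergence."""
    if "S1" in active and cond.get("T1"):
        active = (active - {"S1"}) | {"S2", "S3"}       # both arms start together
    if {"S4", "S5"} <= active and cond.get("T2"):       # join needs both arms done
        active = (active - {"S4", "S5"}) | {"S6"}
    return active

print(selective_branch({"S1"}, {"T1": True, "T2": True}))  # {'S2'}: T1 has priority
print(parallel_branch({"S1"}, {"T1": True}))               # S2 and S3 both active
```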
In that selective branch, you can either go into S2 through T1 or go into S3 through T2. While you are in a selective branch, you can execute a parallel branch, and then, when S4 and S5 are both active, if T2 evaluates to true, you come to a jump which takes you back to S1. So, such control flows you can specify.

With that, we have come to the end of the lesson. What have we learnt in this lesson? We have learnt the broad steps in sequence control design, and these are very important: first of all, identify the inputs and outputs; then very critically study the system behavior, look at the requirements for manual control, operator safety, faults, etcetera; and then try to formalize this description. So, first perhaps write the operation in crisp steps in English, and then convert this description of steps into some sort of formalism; in this lecture we have seen the formalism of state machines, which can be programmed using a sequential function chart. We have seen how to model a particular application: we took a very simple industrial application, a stamping process, and built its state machine. Then we have shown how, given the state machine, you can very mechanically structure your program and write the transition logic, the state logic and the output logic, and we have seen that this is very much expected to lead to error-free programs. And finally, we found a syntax, a graphical language, which can capture this flow of logic in terms of states and transitions; so we have also seen this program flow description strategy using sequential function charts.

Before ending, I think it is nice to look at some problems that you can try as exercises. Exercise problem number 1 says: create an RLL program for traffic light control. The lights should have crosswalk buttons for both directions of traffic. The normal light sequence for both directions will be green for 16 seconds and yellow for 4 seconds. If the crosswalk button has been pushed, then a walk light will be on for 10 seconds and the green light will be extended to 24 seconds. Only this much of the description is given. So, what is the task? The first task would be to develop a state machine description and probably write it in terms of an SFC (one possible starting skeleton is sketched just below), and while developing this SFC we may actually encounter several problems; for example, we may find that some things are not stated. So, let us look at the SFC of this problem. We will skip problem number 2 for the time being... or rather, let us look at problem number 2 next and then go to the SFC.

The next problem says: design a garage door controller using an SFC. The behavior of the garage door controller is like this. There is a single button in the garage and a single button remote control.
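Before continuing with the garage door, here is one possible starting point for exercise 1, an editor's sketch rather than a given solution: it records only the stated timings and the rule that a crosswalk request extends the green phase, leaving the unstated details (red timings, how the walk light overlaps the green phase) to be resolved as part of the exercise.

```python
# One possible starting point for exercise 1: only the stated timings, plus
# the rule that a crosswalk request extends the green phase. Red timing and
# the walk-light overlap are deliberately left open, as the exercise intends.

GREEN_S, YELLOW_S = 16, 4
GREEN_EXTENDED_S, WALK_S = 24, 10

def green_duration(crosswalk_requested: bool) -> int:
    """Green time for one direction, extended when the crosswalk is requested."""
    return GREEN_EXTENDED_S if crosswalk_requested else GREEN_S

print(green_duration(False), green_duration(True))   # 16 24
```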
Coming back to the garage door: when the button is pushed once, the door will move up or down. If the button is pushed once while the door is moving, the door will stop, and a second push will start motion again in the opposite direction. So, you have a single button: you press it once and the door might go up, press it again and it stops, press it again and it reverses. You see that this is purely sequential behavior. You are actually pressing the same button, but sometimes it stops, sometimes it moves up, sometimes it moves down, and this kind of logic is not possible to model using only inputs and outputs; you have to bring in the concept of memory, of state, as you might have noticed when designing digital logic circuits: whenever you have memory, you have sequential logic.

Then, there are top and bottom limit switches to stop the motion of the door. Obviously, you have pressed a button, the door is going up, and it has to stop somewhere, so you have to have limit switches. There is also a light beam across the bottom of the door; if the beam is cut while the door is closing, then the door will stop and reverse. So, while the door is closing, suppose you have kept something below it, say the back of your car is sticking out; the beam is there to ensure that the door can actually close, because otherwise the door would go and hit your car. The door closes fully only if the light beam is not interrupted, which means there is clearance; if the light beam is interrupted at any time, the door immediately stops and reverses, because it assumes there is something in the way and it cannot close. Finally, there is a garage light that will be on for 5 minutes after the door opens or closes, so you have to use a timer here: any time you do an operation, the light goes on and stays on for 5 minutes.

Of the two problems, let us first look at this one, which is simpler. So, here you have the garage door; let us see how it has been modeled. You have step 1 going to step 2 and then to step 3; this is the closed-door part. Let us go back: there is a single button in the garage and a single button remote control, so you can have either a button pressed in the garage or a button pressed from the remote; there are two kinds of buttons. If a button is pressed either from the garage or from the remote, we go to step 3 and the door starts closing; that is the output. From there, if a button is pressed again, either local or remote, the door will stop; likewise, if the bottom limit switch is made, the door will stop. So, if either a button has been pressed or the bottom limit switch is reached, the door stops immediately; and if the light beam is interrupted, then it will not only stop, it will actually reverse.
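As a side sketch (not the SFC being shown in the lecture), the single-button behaviour described above can be captured as a small state machine with direction memory; every name below is illustrative, and the 5-minute light is shown only as a timestamp.

```python
# One possible sketch of the garage-door behaviour: a state machine with
# remembered direction, limit switches, beam reversal and a light timer.

import time

class GarageDoor:
    def __init__(self):
        self.state = "CLOSED"          # CLOSED, OPENING, CLOSING, STOPPED, OPEN
        self.last_dir = "CLOSING"      # remembered so a restart reverses direction
        self.light_until = 0.0

    def button(self):                  # garage button or remote, same effect
        if self.state in ("CLOSED", "OPEN", "STOPPED"):
            self.last_dir = "OPENING" if self.last_dir == "CLOSING" else "CLOSING"
            self.state = self.last_dir            # start moving, opposite to last travel
        else:
            self.state = "STOPPED"                # pushed while moving: stop
        self.light_until = time.time() + 5 * 60   # garage light for 5 minutes

    def limit_switch(self, which):     # 'top' or 'bottom'
        if self.state == "OPENING" and which == "top":
            self.state = "OPEN"
        elif self.state == "CLOSING" and which == "bottom":
            self.state = "CLOSED"
        self.light_until = time.time() + 5 * 60

    def beam_cut(self):                # obstruction while closing: stop and reverse
        if self.state == "CLOSING":
            self.state = self.last_dir = "OPENING"

door = GarageDoor()
door.button(); print(door.state)       # OPENING
door.button(); print(door.state)       # STOPPED
door.button(); print(door.state)       # CLOSING
door.beam_cut(); print(door.state)     # OPENING
```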
Returning to the SFC: so, immediately it is going to reverse, and here, if you have pressed a button to stop it, then if you press it again it will go into reverse. So, you see, we have captured the behavior of the garage door in this form using an SFC. You can do a similar thing for the traffic light as well; let that be an exercise. That brings us to the end of this lesson. Thank you very much, and see you again for the next lesson. Bye bye.

Welcome to lesson 22. So far we have learnt about the basic functioning of a PLC, and we have learnt how to write a program for it, but all this time we have seen the PLC as an abstract device. We have not seen what is inside a PLC system, what actually, physically, makes it. So, in this lesson we are going to look at the hardware environment of PLCs, basically what PLC systems are made of. We are going to look at the components of the PLC, their architecture, by which I mean how they are organized and how they are connected, and their functionality, that is, a description of what they do.

Before we begin, it is customary to see the instructional objectives, which are the following. After going through this lesson, one should be able to mention the distinguishing features of industrial automation computational tasks. A PLC is basically a computer, so it computes; it performs computational tasks related to industrial automation, but these tasks have certain very distinguishing features compared to other tasks which are, let us say, done in an office environment, and one should be able to mention some of these features. One should also be able to mention the major hardware components of a PLC, describe the major typical functional features and performance specifications for the CPU or central processing unit, the IO or input output, the MMI or man machine interface, and the communication modules, and then explain the advantages of a function module and describe some major typical function module types. These are the instructional objectives.

Program execution in these systems is very interesting: there are generally three or four different types of program execution modes specified. For example, a mode could be cyclic. By cyclic we mean that you have a number of computational tasks, just like RLL program execution: you begin, come to the end, and start all over again, and this just goes on. Typically the cycle time remains more or less constant, but it could vary a little depending on the program logic. For example, if you have some program control statements, if-then-else kinds of statements, then whether some block will be executed or not will actually depend on the data. So, program execution time is not always constant, it actually depends on the data, but roughly it will be constant, and in fact it is preferable that it be constant, so that you are not surprised that, for some value of the data, your program execution suddenly takes a very long time and your deadlines are missed. It is preferred that real-time programs have predictable and not too widely varying time requirements.
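A minimal sketch of that cyclic execution mode, assuming placeholder functions for the input scan, the user program and the output update (none of these names come from a real PLC runtime), might look like this:

```python
# A sketch of cyclic execution: read inputs, run the user program, write
# outputs, and keep the scan cycle roughly constant. The read_inputs,
# user_program and write_outputs functions are illustrative placeholders.

import time

CYCLE_S = 0.010   # target scan cycle, e.g. 10 ms (an assumed figure)

def read_inputs():        return {}
def user_program(inputs): return {}
def write_outputs(out):   pass

def run(cycles=5):
    for _ in range(cycles):
        start = time.monotonic()
        write_outputs(user_program(read_inputs()))
        elapsed = time.monotonic() - start
        if elapsed > CYCLE_S:
            # A real controller would flag this as a cycle-time overrun.
            print(f"cycle overrun: {elapsed * 1000:.2f} ms")
        else:
            time.sleep(CYCLE_S - elapsed)   # pad to keep the cycle time steady

run()
```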
The advantages of distributed, networked IO are well understood: cost saving, maintaining the integrity of high speed signals, basically the advantages of digital communication, and the advantages of having an intelligent module near the machine. You can have good sensor diagnostics, fault monitoring functions can be realized without overloading the CPU, and you can do special functions such as start-up. So, in a sense, in such cases the PLC CPU really works like a supervisory system, while the actual control is done on the spot, and you have better centralized coordination and monitoring. So, we have come to the end of the lecture, and I hope you have got a fair idea about what makes a PLC system. As is customary, again you have some points to ponder. Think whether you can mention two distinguishing features of industrial automation tasks compared to, let us say, tasks in a bank, which are also computational tasks and which also communicate. Mention 5 major components of a PLC system; we have mentioned more than 5, so you should be able to mention 5. And distinguish between normal, distributed and networked IO. So here we end today, thank you very much, we will meet again.", "transcription_medium": " we have here to lesson twenty one of the course industrial automation and control. In this lesson, we are going to learn a structured design approach to sequence control. So far we have mainly seen the programming constructs, have seen small small program segments, timers, counters. In this lesson for the first time, we will see that given a practical problem, to how to study the problem, how to what are the steps that you go through to finally arrive at an RLL program. So and and this will be followed using a very systematic approach, because as I have already told you that industrial control applications are very critical, in the sense that the if you have programming errors in them, they can be very expensive in terms of money or in terms even can cost human lives, etcetera. So it is always good to have a very systematic design process by which you can decompose a problem and then finally arrive at a solution. So we will look at the instruction objectives. The instruction objectives of this lesson is are firstly to be able to model simple sequence control applications using state machines. State machine is a is actually a formal method and we advocate the use of formal methods because English can be very ambiguous, sometimes contradictory also. So, we have to model it using methods which have, which are unambiguous, consistent, do not contain contradictions and are also easy to understand and develop. Then from these formal models, we have to develop RLL programs for such applications. And for doing this, there are certain, apart from the RLL programs, there are some modern programming constructs which are being made available, one of them is the SFC or the sequential function chart. So, we will take a look at that and also understand some of its advantages. So that is the, these are the instructional objectives of this lesson. So now, let us go through the steps in basic broad steps in sequence control design. 
So first step is to study the system behavior, this is a very critical step and most of the errors that happen in any programming exercise, not only this kind of industrial automation programming, any programming mainly arises from the fact that the programmer or the developer did not understand the system well. So, this is a very important step and one must first of all identify inputs to the system, that is the programmable controller program, what inputs it will take? Inputs can come from either from sensors in the field or it comes from operator interface which I call the MMI or the man machine interface. So somebody presses a push button that is an that is an operator input right. On the other hand some limit switch is made that is a sensor input. Similarly identify the outputs, so switch on motor, so motor is an actuator that is a kind of output. There is another kind of output for example, switch on some indicator or some lamp that would be a that would be output which again goes to the MMI or the man machine interface, so so we have to first identify these. Then study the sequence of actions and events under the various operational modes, this is the main task, you have to very carefully understand what is going to happen and what will happen after what, at what time intervals, etcetera. Then one thing that must be very clearly remembered is that is that, when you are developing an an an an industrial automation program, one not only has to remember not only has to design for normal normal behavior, but one must to some extent at least take into account the possible failures that can occur, otherwise a system that behaves well under normal behavior can behave in a very nasty manner, if some simple element of the system like a sensor fails, right. Then even apart from the automated behavior, one has to examine the requirements that exist the requirements that exist for number one, manual control. Manual control is very important because for for finally if the automation equipment fails, it should be able to operate the system using using manual control, right may be may be may be right on the field, while the automated control may be actually working quite quite a distance away from the actual actual equipment, it may be housed in some control room. On the other hand the manual controls may be near the equipment at the field, so the possibility of including such manual controls must be examined, whether some additional sensors are required, some sensors may be there but to achieve a kind of functionality, some other sensors may be needed, indicators, alarms and as well as operational efficiency or safety, these these these are the factors which must be considered to finally arrive at the functionality. One must always remember that, the the customer may not always be able to express his or her needs and and at a good automation engineer should be able to supplement it with his own experience in such cases. So having done that, the next step is to convert generally these things are captured manually and using a linguistic description, something like you know something like English statement. So so you talk to customers, talk to engineers on the field and get their requirements. But this is very dangerous to use for program development, so we have to convert this linguist description into formal process models. And in fact a lot of you know inconsistencies which are there, ambiguities which are there in the linguistic description actually surface at this time. 
Even for the during the process of transforming it into a formal process model, one may initially use intermediate forms like you know like flow charts for example, okay. Then finally, but finally it is prescribed that one should be able to convert it to a formal mathematical frame work something like let us say a finite state machine which we will be using here. After up to this the the operations are manual having done that then one has to go for design design of the sequence control logic based on the formal model. And then finally one has to implement the control logic in the form of an RLL program. And it is preferable that these steps, especially the step D is made as much as possible automatic, because this is a step which can be done in an automated manner once B and C have been carried out. And for large programs, it is always preferable to go for automated programming, because that will always lead to error free programs provided your specifications were correct. So we come back to our old stamping process example, which we have seen in earlier lectures. So here is a stamping process, we know this process, so we will we have made some addition to its functionality to be able to explain, you know certain features of a system and make it more complete. So basic principle is the same that there is a piston which and there are two solenoids, hydraulically driven piston which goes up and down and makes stampings. So if we now try to write its list of actions, try to create a linguistic description of process operation, it will look something like this. So, in step a, it says that if the auto push button is pressed, so that is an operator input, it turns powers and lights on. So, there is possibly a switch, once one or more switches, but we will consider it to be one, which will turn the power and the light on, I mean the moment the auto button is pressed. When a part is detected, the thing to be stamped when it is detected, it means placed at the proper place and detected. So, there has to be a part sensor. The press ram, the thing that will be heavy piece which will move and make a stamping, it advances down and it will stop once it makes the bottom limit switch. So, that is again a sensor. and you must have actuators to make the ram move down, then the press the press then retracts up to up to the top limit switch and stops. So, it makes a stamping and stops, all right. On the other hand due to some reason the operator may be may may like to abort a stamping operation. So, there is a stop push button provided to the operator and a and a stop push button stops the press only when it is going down, when it is going up it has no effect, because anyway that is not going to cause any problem. If the stop push button has been pressed, it means that something abnormal might have happened. So the reset push button must be pressed before the auto push button can be pressed for the next cycle of operation. So once you have pressed stop, you have to press reset, you know it is a it is a kind of acknowledgement that the emergency has gone away and the automated operation can resume. Finally, after retracting the the after retracting and then going up, the press waits till the part is removed and the next part is detected. So, till the part will be removed and then after that when the next part will be detected, again the ram will start coming down. So, this is the English behavior of the system, okay. So now let us try to convert it to an unambiguous mathematical description. 
So first step as I said is to get the process inputs and outputs. So what are the process, so here are the process inputs and outputs. So as inputs we have part sensors which gives two kinds of, which generates two kinds of events. One is part placed, another is part removed. Then there are three kinds of push button, the auto push button, the stop push button and the reset push button. These three are operator inputs. Then there are the two limit switch sensors, bottom limit switch and top limit switch. For outputs, we have four outputs. We have an up solenoid which moves the ram up, we have a down solenoid which moves it down, we have the power light switch and we have a part holder which holds the parts while it is being stamped. So, these are our outputs. Now, we develop a state machine. So, let us try to interpret this diagram. So, what is happening in this diagram is that, let me select my pen. So, what is happening is that, you see these are the outputs. So, these squares are the states. We are we are possibly familiar with state machines. So, a state machine is like a graph, which consist of a set of states, these squares are the states and a set of transitions. For example, this is a transition, this is not a good color, go back to white. So, this is the transition and this is a state. So, this is a transition and this is a state. So, what the system does is that, system the system actually during its life cycle or during its activity, the system actually moves from states to states through transitions. So it actually spends time, most of the time in the states and transitions are generally assumed to be momentary, that is it is assumed that insignificant amount of time is required to change states. So you see that it says that initially the when you have double square it means that that is the initial state. So if initially if the auto push button is pressed, this is the transition A which gets activated, which will take place and take the system from state one to state two, if the auto push button is pressed. So, this is the transition condition. You can have much more complicated conditions, in this case we have very simple conditions. And then if this transition occurs, then the system comes to state two. In state two again, if this this transition B takes place, whose place, whose condition is part placed, it will come to state three. So, in this way depending on how the sensors are bringing in signals from the field, the various transitions will be enabled and the system will hop from state to state, that is the behavior of the system. On the other hand, these these these green rectangles indicate that at each state, which are the outputs which are on. For example, you can see that in state one nothing is on, none of the outputs are exercised, while in state two the power and lights are on, in state three the down solenoid is on, actually this is the state when when the solenoid is coming down. So, you see that this is the initial state, here the here the system is switching on the power and the light and and possibly waiting for part place signal to come, so it it might spend some time here, then it is coming down, takes time there. Then from here, it could either go this way or go this way and depending on which which one of this have come. 
So, so it may so happen that the bottom limit switch, if the stop push button has not been pressed, then eventually it will the the bottom limit switch signal will come and then it will come to step four, in which it will activate these outputs. On the other hand, if before the bottom limit switch is pressed, if the stop push button is pressed, then it will come to this state, where it will simply stop and and and put the power and light off. So, you see, so this is the way using a graph of nodes and edges, we can describe the behavior of the system unambiguously. So now, so this is what we know, which we call the state transition diagram, and these are the outputs which are exercised at different and that is actually also in captured in what is known as an output table. So, the output table says that among the four four outputs that we have namely power, light, part hold, power light switch, part hold, up solenoid and down solenoid which are what are their status whether they are on or off at in the various states. So, there are six states and there are four outputs. So, it says that the power light switch stays on in state two, three, four and six, right. While the part hold stays on only during three and four, up solenoid is one during four and down solenoid is one during three. Having done that, we can start developing our… So, you see we have seen that the as the system moves on, the various state logics and transition logics are are are alternately computed. So, first there is a state in which some some some state logic will be satisfied, depending on that outputs will be exercised. After that at some time, some transition logic will get satisfied. So now the system will come to a different state, so the previous state logic is going to be falsified and the new state logic will now become true and then based on that the corresponding outputs will get exercised. So this we have to now capture in a relay ladder logic program, right. So so we we will organize our program into three different three different blocks. The first block, the first block will contain, so may be I will choose this pen now. So, the relay ladder logic will consist of three different blocks. The first block will contain the transitions, this is multiple rungs, this is the first then the state block and finally the output block. So, we will now describe these three blocks in the case of this example. So, let us first see the transition logic. For example, for example what does it say? It says that if when will transition A logic will be satisfied? The transition A logic if you recall brings the system from state one to state two. So, if you wanted to see that we could go back just for once. So, if So, you see here transition A takes the system from state one to state two, transition B takes it from state two to state three, transition C and D are in parallel, it could take state three to either four or five. So, let us remember this and then go ahead. So, it says that if the system is in state one, if it is in state one that depends on the state logic, then the corresponding to the state corresponding to every transition we have an output coil and corresponding to every state we have output coils. So, this is actually an auxiliary contact corresponding to the output coil called state one, it is an abstract variable actually. 
So, it says that if it the system is in state one, then this contact will be made and at that point of time if the auto push button signal comes, then transition A will get enabled, so it will be on, right. So, if we have modeled, if we have modeled our system well, then at a time only one transition will get on, right. If we have, if we do not consider concurrency, then at a time only one transition will get on. Now what will happen? Now in the next stage, so now transition A becomes on and state one was already on. and state one was already on. So, at this point of time we come to the state logic. So, now let us see what happens in the state logic. In the state logic, see state one was on right. Now, because state one was on and because auto push button was pressed, transition a became on. So, the computation came from the transition logic to the state logic. So, what happens is that it found state one is on. So, what happened is that it found that the transition a is on at this point of time, it found this transition on because transition logic has been already evaluated and it has been found to be true. So, therefore, this auxiliary contact will be closed. So, therefore, now state two will be on. Now, one state two is on, two things happen. Firstly, you see in the next, because state two is on, this will be on, this contact will be on, and this contact will now be off. So, in the next cycle, in the next scan cycle, when this rungs will be evaluated, this will go off, and because and transition A will because see this will go off and this will go off therefore because transition A will go off. So therefore, state two this can go off it does not matter because this is on. So therefore, this will stay on. So therefore, it says that now the system is in state two, right. Now when in this way again when state two is on, at that time in the next cycle some other transition will become enabled, depending on what sensors what sensor signals are coming. So similarly, it will turn out for example, in state three, now you at some time transition B will take place, transition B means transition Transition b means that transition b is, let us see transition b. So transition b is part placed, correct. So when the part will be placed, then if that part place signal comes, then what will happen is the transition b will be on and these are not yet enabled. So therefore, state three will be on, right. On the other hand, while state three is on, if either transition c or transition d occurs, transition d is due to the stop push button being pressed and transition c is due to the bottom limit switch being made. If one of the any one of these occurs, then it will no longer be in state three, but it will go to either transition state four. If transition c occur state three will be falsified and state four will become on. On the other hand if transition D occurs then this will be falsified and then state five which is not shown here will become on. So, you see that mechanically once we have developed the state graph, we can simply mechanically describe its behavior. So, so corresponding to every transition we are going to have one rung, corresponding to every state we are going to have one run and as I have described, we are going to put the enabling logics. So we are going to say just from the graph, that if when the when the system is at state one, if auto push button is pressed, it will go to state two. 
Simple this logic which is given from the graph, will take every transition and will write the corresponding logic in transition logic. Similarly, we will say that if transition x has been enabled then it will reach this state. So we can do it from the graph mechanically just one by one, this this writing can be actually written by a program itself. So one need not really think too much about the logic, one one should think about the logic while he is drawing the diagram, after that the programming becomes automated, this is very useful, okay. So, now next we will have the output coil, output logic. Output logic is very simple, very very simple, especially in this case. So, the output logic says that, if you are in state two, then power light switch should be on, as we have given in our output table. So, only thing is that, look look here that, we have we have also added some manual switch, you know. It can be sometimes we may need to we may need to check, we we may need to do things manually also. So, the power light switch will be on, here we have put a manual switch. So, if the PLC is running then if you press the manual switch, then also power light switch may be made on. Similarly, we can have a manual down push button. So, we can this is just to demonstrate that you can put additional logic to include manual operation of the system. So in this so otherwise this program simply says that while you are in state three, down solenoid will will have to be activated, very simple. Compare this with the kind of programs that we had written earlier, in fact for this process itself we had written some programs. So there we did not have any concept of states and transitions, we were directly trying to write outputs in in the form of inputs. Now the problem with this kind of problems is that, they are here systems generally have memory, that is why you need the need the concept states. It is not that if you if you get a certain kind of inputs, you will have produce certain kinds of outputs, it depends on which state the system is in. So the concept of state is very important, and well you can you can bring it down in bring it possibly in certain cases using some temporary variables, but the kind of here you see if you if you if you look at this program this program says it is very complicated logic and I am not even hundred percent sure it is very difficult to be hundred percent sure whether whether this logic is is is full proof. It says that if the auto mode by the way this this this auto mode is actually a you can you can it is a it is an auxiliary contact corresponding to some logical variable, which you can set by a by a by a simple run that if it is auto p b and then you have an auto mode coil and then you have this is auto p b and here you can have auto mode. So, you can have a auto mode coil and this So, you can have a auto mode coil and this will be an auto mode auxiliary switch, so that the so that the p v can be released. So, this is a sort of a you know persistent input. So, if this auto mode is on, so it says that this all the everything will work only if the auto mode is on and then if the bottom limit switch is made and the down solenoid is not on, then the up solenoid can will will be energized. Similarly, and once the up solenoid is energized, it will remain energized until the top limits which is looks looks okay, but but one is never sure and and especially when when when problems will will have two hundred states, then it will be impossible to write such direct programs. 
A chances will be very high that if somebody wants to write it, you will make mistakes. So so this is a typical example of unstructured programming the kind of things that we did before this lesson. Now having said these things, so this is this is this is in very brief, this is a structured design approach to RL programming. Now we must mention since we are coming to the end of this programming part, we must mention that there are you know certain standards of programming languages which have come. For example, IEC 1131, IEC stands for the International Electrotechnical Commission and 1131 is a number. So, this is an international standard for PLC programming and it classifies languages into two types. One is one of the graphical languages, things like you know functional block diagrams or ladder diagrams. These are graphical languages. On the other hand, there may be some several text based languages. For example, structured text or instruction list. These are some of the kinds of programming paradigms. I already told that that although we are learning about the RLL, there are several other programming paradigms which are also supported by various manufacturers and these are typical examples of them. So so for now we we we have understood that this state based pro design is very useful. based design is very useful. So therefore, to to to be able to capture this state transition logic, a separate language has been proposed, separate methods of programming, advanced programming have been proposed and we are going to take a brief look at that. So some of the merits of this kind of you know, advanced programming which takes this kind of abstract state transition logic, firstly because they have they have they have open standards in the sense that, anybody can write a program which can be used by somebody else. Then structuring of programs in terms of module helps in program development and and program maintenance, very important program up gradation, you want you want to add a new functionality, you will find that it is very simple. Just change the state diagram and then find out that, now in in the new state diagram, if you have three extra states, simply you have to add those corresponding rungs. If you have some extra transitions, you have to add those rungs and if you have to redirect some transitions to now new states, then then you have to code that logic, that is the that transition should now come in that state logic. So you absolutely know that, which are the places where you have to make change, these are the very standard benefits of structured programming. Then it is not necessary that all the times every rung has to be computed, in fact only a few rungs are actually active and the other rungs are inactive. So there is no point you can save lot of computational time by skipping those. So so so this needs to be done that will improve computational performance and it supports concurrency that is very important that it that is suppose concurrency, because it very often happens that that there are there there are many things several things which will be taking place together and they which are which are best modeled as concurrent and especially now that we have you know this so called multitasking operating systems or or executives running on microprocessors. So there is there is no reason why this cannot be done, I mean we have the technology to enable it. So therefore we we must use it. 
So so therefore, we have a formalism which is which is standard and which are called sequential function charts. So a sequential function chart is actually a graphical paradigm for describing modular concurrent control program structure. So remember that it it is basically used to describe structure and not the program itself. So which part so as if you have organized your program into a into a number of well written functions, which are modularly arranged. So you just using the sequential function chart, you are just describing that which of these functions will be executed when and under what condition. So, so you have this remark which says that the SFC merely describes the structural organization of the program modules, while the the actual program statements in the modules still have to be written probably using existing PLC programming languages something like you know either RLAL or instruction list or whatever. So if let us look at quick look at the basic SFC constructs, they are they are basically state machine like constructs and and contain some constructs which are which are simple simple programming construct with which we are familiar. So, you have basically just like states and transitions, here you have steps and transitions. So, each step is actually a control program module which may be programmed in RLL or any other language. So, as I say it is a function. Similarly, this steps can be of two types, number one initial step, I had remarked the initial step and the initial step execution can be of two types. One is when the first time we are executing after power on and second is if if if a program reset actually sequential function charts have some standard instructions which will which will reset which are assumed to be reset assumed to reset the program. So, after reset also you can execute a particular type of step called the initial step typically used for initializing variables and states and intermediate variables. Otherwise you have regular steps, we which one of them are active at any time actually depends on the transition logic as we have seen. So, when a step becomes inactive its state is initialized and only active steps are evaluated during the scan that saves time. Now we have transitions, so in transition each transition is also a control program module which evaluates based on available signals, operator inputs, etcetera, which transition conditions are getting enabled. So, once a transition variable evaluates it true, then the steps following it are activated. And we say steps, because we in this formalism, typically we have seen that one transition can go to only one step, provided you do not have concurrency, but since SFC is support concurrency, so it may happen that after a one transition, two concurrent you know threads of execution will start. So that is why we have said steps following it are activated, and those preceding it are deactivated. Only transition following active states, so again when you are in an in a state already which is active, only certain transitions can occur. So therefore, only those transition need to be evaluated, so only those are evaluated. A transition can also be a it can either be a it can either be a program itself having embodying very complicated logic or it can be a simple variable like in our example which is simple enough. 
So, for example, in this case it says that S 1 is actually an initial step, and while S 1 is active, T 1 is active; if the conditions for T 1 are fulfilled, then the system will come to S 2. While at S 2, T 2 will also become active, and if T 2 occurs, then it will go to S 3. So if you look at the activations, in scan 1 it may happen that S 1 and T 1 are active, while in scan 2 it may happen that T 1 has fired, that is, it has actually taken place because T 1 was active and its condition was satisfied; so the system has come to S 2, and then T 2 is active. If T 2 is active, it is continuously evaluated; it may happen that in scan 3, T 2 evaluates to false, so these two continue to be active, while at scan 4 it may happen that T 2 has now actually become true. Therefore S 3 becomes active, and then any transition outgoing from it will become active. In this way, certain parts of the program become active at a time as the system moves through the sequential function chart. Using this, you can describe several kinds of programming constructs. For example, this is a case where, while the system is at S 1, T 1 and T 2 are both active, so any one of them can become true; if T 1 is true, the next active state will be S 2, and if T 2 is true, the next active state will be S 3. It could conceivably also happen that, in a given scan, T 1 and T 2 both become true. In that case you have to resolve the conflict, because both transitions cannot take place simultaneously. So there is a convention that the left most transition actually takes place: if T 1 and T 2 both become true at the same time, then T 1 is assumed to have taken place. That was a selective or alternative branch, that is, the system can either flow through this branch or flow through that branch, but not both of them simultaneously. So as I said, if S 1 is active and T 1 is true, then S 2 becomes active; if T 1 is false and T 2 is true, then S 3 becomes active; and the moment the next state becomes active, S 1 becomes inactive. Similarly, there is left to right priority, as I said, and only one branch can be activated: if S 4 is active and T 3 becomes true, S 6 becomes active and S 4 becomes inactive, same thing. Similarly, just like selective or alternative branches, you can also have simultaneous or parallel branches. This is the new construct which is not possible to implement in a normal state machine, and this is the construct that helps concurrent control programs. The actual realization of how you will run concurrent executions can be made using various operating system methodologies such as multitasking. So here you have a parallel branch: when T 1 occurs at S 1, both these states become active together, and at some point of time S 4 becomes active; but T 2 cannot take place unless the other branch also comes to S 5. Only when S 4 and S 5 have both become active will T 2 become active and be evaluated, and if it evaluates to true, then the system will come to S 6 and S 4 and S 5 will become inactive.
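The left to right priority rule for a selective branch can be sketched in a few lines of Python; the branch order and the transition names are assumptions for illustration only.

```python
# Sketch of the selective (alternative) branch convention: transitions leaving
# a step are checked left to right, and only the first true one fires.
def resolve_selective(active_step, branches, signals):
    """branches: list of (condition, next_step) ordered left to right."""
    for condition, next_step in branches:
        if condition(signals):
            return next_step        # leftmost true transition wins
    return active_step              # no transition fired; stay put

branches_from_S1 = [
    (lambda s: s["T1"], "S2"),      # left branch
    (lambda s: s["T2"], "S3"),      # right branch
]

# Both T1 and T2 true in the same scan: T1 is assumed to have taken place.
print(resolve_selective("S1", branches_from_S1, {"T1": True, "T2": True}))  # S2
```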
So the same thing is written here: if S 1 is active and T 1 becomes true, S 2 and S 3 become active and S 1 becomes inactive. If S 4 and S 5 are active and T 2 becomes true, then S 6 becomes active and S 4 and S 5 become inactive. Similarly, among program control statements, you can have jumps. For example, you can have a jump where, after S 2, when T 3 occurs, control will go back to S 1. Similarly, you can have a jump inside a loop: jumping here, you can either come to S 5 or come to S 1. So basically you can execute this kind of program control statement using SFCs. A jump occurs after a transition, and a jump into or out of a parallel sequence is illegal, because you cannot jump out of one arm while you are inside a parallel sequence. You must enter the arms of any parallel sequence together; while executing, the arms may be in different states, but you must enter them together and you must leave them together. Lastly, you may have termination of a program, that is, a step without a following transition. If you have a step out of which there is no transition, then it is a dead state, and once it is activated it can only be deactivated by a special instruction called SFC reset. You can also nest a concurrent branch within a concurrent branch. Here, if T 1 occurs, S 2 and S 3 become active; then from S 3, if T 2 occurs, S 2, S 4 and S 5 can be simultaneously active, and if all of them are active then T 3 will take place and then S 6. Similarly, you see exactly the same kind of thing happening here: you have a selective branch, so you can either go into S 2 through T 1 or go into S 3 through T 2. While you are in one branch of a selective branch, you can execute a parallel branch, and then, when S 4 and S 5 are both active, if T 2 evaluates to true, you will come to a jump which will again take you to S 1. So such control flows you can specify. Now we have come to the end of the lesson. In this lesson, what have we learnt? We have learnt the broad steps in sequence control design, and these are very important: first of all identify the inputs and outputs, then very critically study the system behavior, look at the requirements for manual control, operator safety, faults etcetera, and then try to formalize this description. So first you may write the operation in crisp steps in English, and then try to convert this description of steps into some sort of formalism; in this lecture we have seen the formalism of state machines, which can be programmed using a sequential function chart. We have seen how to model a particular application: we took a very simple industrial application, a stamping process, and built its state machine. And then we have shown how, given the state machine, you can very mechanically structure your program and write the transition logic, the state logic and the output logic. And we have seen that this is very much expected to lead to error free programs.
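Going back to the parallel branch rules recalled at the start of this recap, the divergence and convergence behaviour can be sketched with a set of active steps; the step names are assumptions, not the lecture's figure labels.

```python
# Sketch of parallel (simultaneous) branches: one transition can activate two
# steps together, and the converging transition is only evaluated once all of
# its preceding arms have arrived.
active = {"S1"}

def fire(pre, post, condition):
    """Fire only if every preceding step is active and the condition holds."""
    global active
    if set(pre) <= active and condition:
        active = (active - set(pre)) | set(post)

fire(["S1"], ["S2", "S3"], True)          # divergence: S2 and S3 active together
fire(["S2"], ["S4"], True)                # left arm reaches S4
fire(["S4", "S5"], ["S6"], True)          # does nothing yet: S5 is not active
fire(["S3"], ["S5"], True)                # right arm reaches S5
fire(["S4", "S5"], ["S6"], True)          # now it converges
print(active)                             # {'S6'}
```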
And finally, we have actually found a syntax, a graphical language, which can capture this flow of logic in terms of states and transitions. So we have seen this program control, or program flow, description strategy using sequential function charts. Now, before ending, I think it is nice to look at some problems which you can try as exercises. Let us look at exercise problem number one. Problem number one says: create an RLL program for traffic light control. The lights should have cross walk buttons for both directions of traffic. A normal light sequence for both directions will be green for 16 seconds and yellow for 4 seconds. If the cross walk button has been pushed, then a walk light will be on for 10 seconds and the green light will be extended to 24 seconds. So only this much of a description is given. Now what is the task? The first task would be to develop a state machine description and probably write it in terms of an SFC. And while going to do this SFC, we may actually encounter several problems; for example, we may find that some things are not straightforward. For example, let us look at the SFC of this problem. For the time being we will skip it; let us look at problem number two next and then we will go to the SFCs. The next problem says: design a garage door controller using an SFC. The behavior of the garage door controller is like this. There is a single button in the garage and a single button remote control. When the button is pushed once, the door will move up or down. If the button is pushed once while moving, then the door will stop, and a second push will start motion again in the opposite direction. So you have a single button which you are going to push: you press it once, it might go up; press it again, it will stop; press it again, it will reverse. You have only one single button, so you see that this is purely sequential behavior. Unless you have a concept of state, you can never model it: you are actually pressing the same button, but sometimes it is stopping, sometimes it is moving up, sometimes it is moving down. This kind of logic is not possible to model using only inputs and outputs; you have to bring in the concept of memory, or state, as you might have noticed when you have designed digital logic circuits: whenever you have memory, you have sequential logic. Then, there are top and bottom limit switches to stop the motion of the door; obviously, once you have pressed a button and it is going up, it has to stop somewhere, so you have to have limit switches. There is also a light beam across the bottom of the door; if the beam is cut while the door is closing, then the door will stop and reverse.
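Before going on, the timing specification of exercise problem one can at least be written down as data; how the 10 second walk light overlaps the extended green is left open in the problem statement, so this hedged sketch records only the durations themselves.

```python
# The traffic light exercise's stated durations (seconds) for one direction.
def phase_durations(crosswalk_requested):
    return {
        "GREEN":  24 if crosswalk_requested else 16,
        "YELLOW": 4,
        "WALK":   10 if crosswalk_requested else 0,
    }

print(phase_durations(False))   # {'GREEN': 16, 'YELLOW': 4, 'WALK': 0}
print(phase_durations(True))    # {'GREEN': 24, 'YELLOW': 4, 'WALK': 10}
```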
Why the light beam? While the door is closing, suppose you have put something below it; you are trying to ensure that the door can actually close, because otherwise the door will get stuck or, if anything is kept there, suppose the back of your car is sticking out, the door will go and hit your car. So the door will close fully only if the light beam is not interrupted, which means that there is clearance. Otherwise, if the light beam is interrupted at any time while closing, immediately the door will stop and reverse; it will conclude that there is something there and it cannot close. There is also a garage light that will be on for five minutes after the door opens or closes, so you have to use a timer here: any time you do an operation, the light is going to go on and stay on for five minutes. So we have the two problems; let us first look at the second one, which is simpler. Here you have the garage door, so let us see how it has been modeled. In this case you have step one going to step two going to step three; this is the closed door. Let us go back: there is a single button in the garage and a single button remote control, so you can have either a button pressed in the garage or a button pressed from the remote; there are two kinds of buttons. If either of them is pressed, we go to step three and we start closing the door; that is the output. On the other hand, from there, if a button has been pressed, either from local or from remote, what will happen? The door will stop. Similarly, if the bottom limit switch is made, the door will stop. So if either a button has been pressed or the bottom limit switch is reached, then immediately the door will stop; and if the light beam is interrupted, then it will not only stop, it will actually reverse, immediately. And here, if you have pressed a button to stop the door and then you press it again, it will reverse. So you see, we have captured the behavior of the garage door in this form using an SFC. You can do a similar thing also for the traffic light; let that be an exercise. That brings us to the end of this lesson. Thank you very much and see you again for the next lesson, bye bye. Welcome to lesson twenty two. So far we have learnt about the basic functioning of a PLC and we have learnt how to write a program for it, but all this time we have seen the PLC as an abstract device. We have not seen what is inside a PLC system, what actually physically makes it. So in this lesson we are going to look at the hardware environment of PLCs, basically what PLC systems are made of. We are going to look at the components of the PLC system, their architecture, by which I mean how they are organized and how they are connected, and their functionality, that is, describing what they do. So now let us see.
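Before moving on to the hardware lesson, here is a hedged Python sketch of the garage door behaviour just modeled with the SFC: one button toggling between moving and stopped, limit switches ending the motion, and the light beam forcing a stop and reverse while closing. The state names and the choice of initial state are assumptions, and the five minute garage light timer is left out.

```python
# Sketch of the single-button garage door behaviour as a plain state machine.
state = "CLOSED"            # CLOSED, OPENING, OPEN, CLOSING, STOPPED
last_direction = "UP"

def on_button():
    """Either the garage button or the remote button; they act identically."""
    global state, last_direction
    if state == "CLOSED":
        state, last_direction = "OPENING", "UP"
    elif state == "OPEN":
        state, last_direction = "CLOSING", "DOWN"
    elif state == "STOPPED":
        # a second push restarts motion in the opposite direction
        last_direction = "DOWN" if last_direction == "UP" else "UP"
        state = "OPENING" if last_direction == "UP" else "CLOSING"
    else:
        state = "STOPPED"   # a push while moving stops the door

def on_limit_switch(which):
    global state
    if which == "TOP" and state == "OPENING":
        state = "OPEN"
    elif which == "BOTTOM" and state == "CLOSING":
        state = "CLOSED"

def on_beam_interrupted():
    global state, last_direction
    if state == "CLOSING":          # stop and reverse only while closing
        state, last_direction = "OPENING", "UP"
```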
Before we begin, it is customary to see the instructional objectives. The instructional objectives are the following. First, after going through this lesson, one should be able to mention the distinguishing features of industrial automation computational tasks. A PLC is basically a computer, so it performs computational tasks related to industrial automation, but these tasks have certain very distinguishing features compared to other tasks which are, let us say, done in an office environment. So you should be able to mention some of these features; mention the major hardware components of a PLC; describe the major typical functional features and performance specifications for the CPU or central processing unit, the I O or input output, the MMI or man machine interface, and the communication modules. Then, explain the advantages of a function module, and describe some major typical function module types. These are the instructional objectives. The program execution in these systems is very interesting; there are generally three or four different types of program execution modes specified. For example, a mode could be cyclic. By cyclic we mean that you have a number of computational tasks which begin, just like RLL program execution, run to the end, and start all over again. This just goes on, and typically the cycle time remains more or less constant, but it could vary a little bit depending on the program logic. For example, if you have some program control statements, if then else kinds of statements, then whether some block will be executed or not will actually depend on the data. So program execution time is not always constant, it actually depends on the data, but roughly it will be constant. In fact, it is preferable that it is constant, so that you are not surprised that for some value of data your program execution suddenly takes a very long time and your deadlines are missed. It is preferred that real time programs have predictable and not too widely varying time requirements. The advantages of distributed, networked I O are well understood: cost saving, maintaining the integrity of high speed signals, because of the advantages of digital communication, and the advantages of having an intelligent module near the machine. So you can have good sensor diagnostics, fault monitoring functions can be realized without overloading the CPU, and you can do special functions like start up. In a sense, in such cases the PLC CPU really works like a supervisory system and the actual control is done on the spot, so you have better centralized coordination and monitoring. So we have come to the end of the lecture, and I hope you have got a fair idea about what makes a PLC system. As is customary, again you have some points to ponder. Think whether you can mention two distinguishing features of industrial automation tasks compared to, let us say, a task in a bank, which is also a computational task and which also communicates. Mention five major components of a PLC system; we have mentioned more than five, so you should be able to mention five. And distinguish between normal, distributed and networked I O. So here we end today. Thank you very much, we will meet again.
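The cyclic execution mode and the concern about data dependent execution time can be sketched in a few lines; the 10 millisecond budget and the empty task bodies below are assumptions for illustration, not figures from the lecture.

```python
# Sketch of cyclic execution: read inputs, solve logic, write outputs, repeat,
# and check that each scan stays within an assumed cycle time budget.
import time

CYCLE_BUDGET_S = 0.010           # assumed deadline for one scan

def read_inputs():  return {}
def solve_logic(inputs):  return {}
def write_outputs(outputs):  pass

def run_cycles(n):
    for _ in range(n):
        start = time.monotonic()
        write_outputs(solve_logic(read_inputs()))
        elapsed = time.monotonic() - start
        if elapsed > CYCLE_BUDGET_S:
            print("deadline missed: scan took", elapsed, "seconds")

run_cycles(3)
```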
.", "transcription_large_v3": " So, welcome to lesson 21 of the course Industrial Automation and Control. In this lesson, we we are going to learn a structured design approach to sequence control. So far, we have mainly seen the programming constructs, have seen small program segments, timers, counters. In this lesson for the first time, we will see that given a practical problem, how to study the problem, what are the steps that you go through to finally arrive at an RLL So, and this will be followed using a very systematic approach, because as I have already told you that industrial control applications are very critical. In the sense that they if you have programming errors in them, they can be very expensive in terms of money or in terms even can cost human lives etcetera. So, it is always good to have a very systematic design process by which you can decompose a problem and then finally arrive at a solution. So, we will look at the instruction objectives, the instruction or objectives of this lesson are firstly to be able to model simple sequence control applications using state machines. State machine is actually a formal method and we advocate we use a formal methods, because English can be very ambiguous sometimes contradictory also. So, we have to model it using methods which have which are unambiguous consistent do not contain contradictions and are also easy to understand and develop. Then from these formal models we have to develop RLL programs for such applications and for doing this there are certain apart from the RLL programs there are some modern programming constructs which are being made available one of them is the SFC or the sequential function chart. So, we will take a look at that and also understand some of its advantages. So, that is the these are the instructional objectives of this lesson. So, now let us go through the steps in basic broad steps in sequence control design. So, first step is to study the system behavior this is a very critical step and most of the errors that happen in any programming exercise not only this kind of industrial automation programming any programming mainly arises from the fact that the programmer or the developer did not understand the system well. So, this is a very important step and one must first of all identify inputs to the system that is the programmable controller program, what inputs it will take inputs can come from either from sensors in the field or it comes from operator interface which I call the MMI or the man machine interface. So, somebody presses a push button that is an operator input, on the other hand some limit switch is made that is a sensor input. Similarly identify the outputs, so switch on motor, so motor is an actuator that is a kind of output. There is another kind of output for example, switch on some indicator or some lamp that would be an output which again goes to the MMI or the man machine interface. So, we have to first identify these. Then study the sequence of actions and events under the various operational modes, this is the main task. You have to very carefully understand what is going to happen and what will happen after what, at what time intervals etcetera. 
Then one thing that must be very clearly remembered is that, when you are developing an industrial automation program, one not only has to remember, not only has to design for normal behavior, but one must to some extent at least take into account the possible failures that can occur otherwise a system that behaves well under normal behavior can behave in a very nasty manner if some simple element of the system like a sensor fails. Then even apart from the automated behavior one has to examine the requirements that exist for number one manual control. Manual control is very important because for finally, if the automation equipment fails it should be able to operate the system using manual control right may be right on the field. While the automated control may be actually working quite a distance away from the actual equipment it may be housed in some control room. On the other hand, the manual controls may be near the equipment at the field. So, the possibility of including such manual controls must be examined. Whether some additional sensors are required, some sensors may be there, but to achieve a kind of functionality some other sensors may be needed. Indicators, alarms and as well as operational efficiency or safety, these are the factors which must be considered to finally arrive at the functionality. One must always remember that the customer may not always be able to express his or her needs and a good automation engineer should be able to supplement it with his own experience in such cases. So, having done that the next step is to convert generally these things are captured manually and using a linguistic description something like you know something like English statement. So, you talk to customers, talk to engineers on the field and get their requirements, but this is very dangerous to use for program development. So, we have to convert this linguistic description into formal process models and in fact, a lot of you know inconsistencies, which are there ambiguities, which are there in the linguistic description actually surface at this time. Then for the during the process of transforming it into a formal process model, one may initially use intermediate forms like you know like flow charts for example. Then finally, but finally it is prescribed that one should be able to convert it to a formal mathematical frame work something like let us say a finite state machine, which we will be using here. up to this the operations are manual having done that then one has to go for design of the sequence control logic based on the formal model. And then finally, one has to implement the control logic in the form of an RLL program and it is preferable that these steps especially the step D is made as much as possible automatic. Because, this is a step which can be done in an automated manner once B and C have been carried out. And for large programs it is always preferable to go for automated programming, because that will always lead to error free programs provided your specifications were correct. So, we come back to our old stamping process example, which we have seen in earlier lectures. So, here is a stamping process we know this process. So, we will we have made some addition to its functionality to be able to explain you know certain features of a system and make it more complete. So, basic principle is the same that there is a piston which and there are two solenoids hydraulically driven piston which goes up and down and makes stampings. 
So, if we now try to write its list of actions, try to create a linguistic description of the process operation, it will look something like this. So, in step a, it says that if the auto push button is pressed, so that is an operator input, it turns powers and lights on. So, there is possibly a switch once one or more switches, but we will consider it to be 1, which will turn the power and the light on, I mean the moment the auto button is pressed. When a part is detected, the thing to be stamped when it is detected, it means placed at the proper place and detected. So, there has to be a part sensor, the press ram the thing that will be heavy piece, which will move and make a stamping, it advances down and it will stop once it makes the bottom limit switch. So, that is again a sensor and you must have actuators to make the RAM move down, then the press then retracts up to the top limit switch and stops. So, it makes a stamping and stops. On the other hand, due to some reason the operator may be may like to abort a stamping operation. So, there is a stop push button provided to the operator and a stop push button stops the press only when it is going down, when it is going up it has no effect, because anyway that is not going to cause any problem. If the stop push button has been pressed it means that something abnormal might have happened. So, the reset push button must be pressed before the auto push button can be pressed for the next cycle of operation. So, once you have pressed stop you have to press reset, you know it is a kind of acknowledgement that the emergency has gone away and the automated operation can resume. Finally, after retracting the after retracting and then going up the press waits till the part is removed and the next part is detected. So, till the part will be removed and then after that when the next part will be detected again the RAM will start coming down. So, this is the English behavior of the system. So, now let us try to convert it to an unambiguous mathematical description. So, first step as I said is to get the process inputs and outputs. So, what are the process? So, here are the process inputs and outputs. So, as inputs we have part sensors, which gives two kinds of, which generates two kinds of events, one is part placed another is part removed. Then, there are three kinds of push button, the auto push button, the stop push button and the reset push button, these three are operator inputs. Then, there are the two limit switch sensors, bottom limit switch and top limit switch. For outputs, we have four outputs, we have an up solenoid, which moves the RAM up we have a down solenoid which moves it down, we have the power light switch and we have a part holder which holds the parts while it is being stamped. So, these are our outputs. Now we develop a state machine, so let us try to interpret this diagram. So, what is happening in this diagram is that, let me select my pen. So, what is happening is that, you see these squares are the states, we are possibly familiar with state machine. So, state machine is like a graph, which consists of a set of states, these squares are the states and a set of transitions. For example, this is a transition, this is not a good color go back to white. So, this is a transition and this is a state. So, this is a transition and this is a state. So, what the system does is that system actually during its life cycle or during its activity, the system actually moves from states to states through transitions. 
So, it actually spends time most of the time in the states and transitions are generally assumed to be momentary that is it is assumed that insignificant amount of time is required to change states. So, you see that it says that initially the when you have double square it means that that is the initial state. So, if initially if the auto push button is pressed this is the transition A, which gets activated which will take place and take the system from state 1 to state 2, if the auto push button is pressed. So, this is the transition condition you can have much more complicated conditions in this case we have very simple conditions. And then if this transition occurs then this system comes to state 2, in state 2 again if this transition B takes place whose condition is part placed it will come to state 3. So, in this way depending on how the sensors are bringing in signals from the field the various transitions will be enabled and the system will hop from state to state that is the behavior of the system. On the other hand, these green rectangles indicate that at each state which are the outputs which are on. For example, you can see that in state 1 nothing is on, none of the outputs are exercised while in state 2 the power and lights are on, in state 3 the down solenoid is on, actually this is the state when the solenoid is coming down. So, you see that this is the initial state, here the system is switching on the power and the light and possibly waiting for part place signal to come. So, it might spend some time here, then it is coming down, so takes time there. Then from here, it could either go this way or go this way and depending on which one of this have come. So, it may so happen that the bottom limit switch, if the stop push button has not been pressed, then eventually it will the bottom limit switch signal will come and then it will come to step four in which it will activate these outputs. On the other hand, if before the bottom limit switch is pressed, if the stop push button is pressed then it will come to this state where it will simply stop and put the power and light off. So, you see so this is the way using a graph of nodes and edges we can describe the behavior of the system unambiguously. So, now, this is what we know what we call the state transition diagram and these are the outputs which are exercised at different and that is actually also in captured in what is known as an output table. So, the output table says that among the four outputs that we have namely power, light, power hold, power light switch, power hold, up solenoid and down solenoid, which are what are their status whether they are on or off at in the various states. So, there are 6 states and there are 4 outputs. So, it says that the power light switch stays on in state 2, 3, 4 and 6, while the power hold stays on only during 3 and 4, up solenoid is 1 during 4 and down solenoid is 1 during 3. Having done that we can start developing our. So, you see we have seen that the as the system moves on the various state logics and transition logics are alternately computed. So, first there is a state in which some state logic will be satisfied, depending on that outputs will be exercised, after that at some time some transition logic will get satisfied. So, now the system will come to a different state, so the previous state logic is going to be falsified and the new state logic will now become true and then based on that the corresponding outputs will get exercised. 
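The state transition diagram and the output table just read out can be written down as plain data; only the transitions explicitly described so far (auto push button, part placed, bottom limit switch, stop push button) are filled in, and the remaining transitions of the diagram are omitted from this sketch.

```python
# The stamping process graph and output table as data.
transitions = {
    # (current_state, transition): next_state
    (1, "A"): 2,   # auto push button pressed
    (2, "B"): 3,   # part placed
    (3, "C"): 4,   # bottom limit switch made
    (3, "D"): 5,   # stop push button pressed
}

# output table: in which states each output is ON
output_table = {
    "POWER_LIGHT": {2, 3, 4, 6},
    "PART_HOLD":   {3, 4},
    "UP_SOL":      {4},
    "DOWN_SOL":    {3},
}

def outputs_in(state):
    return {name for name, states in output_table.items() if state in states}

print(outputs_in(3))   # {'POWER_LIGHT', 'PART_HOLD', 'DOWN_SOL'}
```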
So, this we have to now capture in a relay ladder logic program. So, we will organize our program into three different blocks, the first block will contain, so may be I will choose this pen now. So, the relay ladder logic will consist of three different blocks. The first block will contain the transitions, This is multiple rungs, then the state block and finally, the output block. So, we will now describe these three blocks in the case of this example. So, let us first see the transition logic. For example, what does it say, it says that if when will transition a logic will be satisfied, the transition a logic if you recall brings the system from state 1 to state 2. So, if you wanted to see that we could go back just for once. So, you see here transition A takes the system from state 1 to state 2, transition B takes it from state 2 to state 3, transition C and D are in parallel it could take state 3 to either 4 or 5. So, let us remember this and then go ahead. So, it says that if the system is in state 1, if it is in state 1 that depends on the state logic. Then the corresponding to the state corresponding to every transition we have an output coil and corresponding to every state we have output coils. So, this is actually an auxiliary contact corresponding to the output coil called state 1. So, it is an abstract variable actually. So, it says that if it the system is in state 1, then this contact will be made and at that point of time if the auto push button signal comes, then transition A will get enabled. So, it will be on. So, if you have modeled, if you have modeled our system well then at a time only one transition will get on. If you have if you do not consider concurrency then at a time only one transition will get on. Now, what will happen? Now, in the next stage, so now transition A becomes on and state 1 was already on. So, at this point of time we come to the state logic. So, now let us see what happens in the state logic. In the state logic see state 1 was on right. Now, because state 1 was on and because auto push button was pressed transition A became on. So, the computation came from the transition logic to the state logic. So, what happens is that it found state 1 is on. So, what happened is that it found that the transition A is on at this point of time it found this transition on, because transition logic has been already evaluated and it has been found to be true. So, therefore, this auxiliary contact will be closed. So, therefore, now state 2 will be on. Now, once state 2 is on two things happen, you see in the next because state 2 is on this will be on this contact will be on and this contact will now be off. So, in the next cycle in the next scan cycle when this rungs will be evaluated this will go off and because and transition A will because see this will go off and this will go off therefore, because transition A will go off. So, therefore, state 2 this can go off it does not matter because this is on. So, therefore, this will stay on. So, therefore, it says that now the system is in state 2. Now, when in this way again when state 2 is on at that time in the next cycle some other transition will become enabled depending on what sensors signals are coming. So, similarly it will turn out for example, in state 3 now you at some time transition B will take place transition B means that B is let us see transition B. So, transition B is part placed correct. 
So, when the part will be placed then if the if that part place signal comes then what will happen is the transition B will be on and these are not yet enabled. So, therefore, state 3 will be on. On the other hand, while state 3 is on if either transition C or transition D occurs, transition D is due to the stop push button being pressed and transition C is due to the bottom limit switch being made. If one of the any one of these occurs, then it will no longer be in state 3, but it will go to either transition state 4. If transition C occurs state 3 will be falsified and state 4 will become on. On the other hand, if transition D occurs then this will be falsified and then state 5 which is not shown here will become on. So, you see that mechanically once we have developed the state graph, we can simply mechanically describe its behavior. So, corresponding to every transition we are going to have one rung, corresponding to every state we are going to have one run and as I have described we are going to put the enabling logics. So, we are going to say just from the graph that if when the when the system is at state one if auto push button is pressed it will go to state two. Simple this logic which is given from the graph will take every transition and will write the corresponding logic in the transition logic. Similarly, we will say that if transition x has been enabled then it will reach this state. So, we can do it from the graph mechanically just one by one this writing can be actually written by a program itself. So, one need not really think too much about the logic one should think about the logic while he is drawing the diagram after that the programming becomes automated this is very useful. So, now, next we will have the output coil, output logic, output logic is very simple, very very simple especially in this case. So, the output logic says that if you are in state 2, then power light switch should be on as we have given in our output table. So, only thing is that look here that we have also added some manual switch, you know it it can be sometimes we may need to check, we may need to do things manually also. So, the power light switch will be on here we have put a manual switch. So, if the PLC is running then if you press the manual switch then also power light switch may be made on. Similarly, we can have a manual down push button. So, we can this is just to demonstrate that you can put additional logic to include manual operation of the system. So, in this so otherwise this program simply says that while you are in state 3 down solenoid will have to be activated very simple. Compare this with the kind of programs that we had written earlier in fact, for this process itself we had written some programs. So, there we did not have any concept of states and transitions we were directly trying to write outputs in the form of inputs. Now, the problem with this kind of problems is that they are here systems generally have memory that is why you need the concept states. It is not that if you get a certain kind of inputs you will have produce certain kinds of outputs it depends on which state the system is in. So, the concept of state is very important and well you can bring it down in bring it possibly in certain cases using some temporary variables. But, the kind of here you see if you look at this program, this program says it is very complicated logic and I am not even 100 percent sure, it is very difficult to be 100 percent sure whether this logic is full proof. 
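The mechanical translation described here, one rung per transition enabled only when its source state is active, one latched rung per state, and one rung per output, can be mimicked with ordinary Python booleans standing in for RLL coils and contacts; only a subset of the example's transitions and outputs is included, and the signal names are assumptions.

```python
# Sketch of the transition block / state block / output block structure.
graph = [("A", 1, "AUTO_PB", 2), ("B", 2, "PART_PLACED", 3),
         ("C", 3, "BOTTOM_LS", 4), ("D", 3, "STOP_PB", 5)]
outputs_on_in = {"POWER_LIGHT": {2, 3, 4, 6}, "DOWN_SOL": {3}}

active = {s: (s == 1) for s in range(1, 7)}   # state 1 is the initial state

def scan(signals):
    # transition logic, evaluated from the states active at the start of the scan
    fired = [(src, dst) for name, src, cond, dst in graph
             if active[src] and signals.get(cond, False)]
    # state logic: deactivate the source state, latch the destination state
    for src, dst in fired:
        active[src], active[dst] = False, True
    # output logic: an output is on whenever one of its states is active
    return {out: any(active[s] for s in states)
            for out, states in outputs_on_in.items()}

print(scan({"AUTO_PB": True}))   # state 2 becomes active and the light turns on
```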
It says that if the auto mode by the way this auto mode is actually you can it is an auxiliary contact corresponding to some logical variable which you can set by a simple run that if it is auto PB and then you have an auto mode coil and then you have this is auto PB and here you can have auto mode. So, you can have a auto mode coil and this will be an auto mode auxiliary switch, so that the PB can be released. So, this is a sort of a you know persistent input. So, if this auto mode is on, so it says that this all the everything will work only if the auto mode is on and then if the bottom limit switch is made and the down solenoid is not on, then the up solenoid can will be energized. Similarly, and once the up solenoid is energized, it will remain energized until the top limit switches looks, but one is never sure and especially when problems will have 200 states, then it will be impossible to write such direct programs. Chances will be very high that if somebody wants to write it, he will make mistakes. So, this is a typical example of unstructured programming, the kind of things that we did before this lesson. Now, having said these things, so this is in very brief, this is a structured design approach to RL programming. Now, we must mention since we are coming to the end of this programming part, we must mention that there are you know certain standards of programming languages which have come. For example, IEC 1131, IEC stands for the International Electro-Technical commission and 1131 is a number. So, this is an international standard for PLC programming and it classifies languages into two types, one is one are graphical languages things like you know functional block diagrams or ladder diagrams, these are graphical languages. On the other hand, there may be some several text based languages for example, structured text or instruction list, these are some of the kinds of programming paradigms. I already told that although we are learning about the RLL, there are several other programming paradigms which are also supported by various manufacturers and these are typical examples of them. So, for now we have understood that this state based design is very useful. So, therefore, to be able to capture this state transition logic a separate language has been proposed, separate methods of programming, advance programming have been proposed and we are going to take a brief look at that. So, some of the merits of this kind of you know advance programming, which takes this kind of abstract state transition logic, firstly because they have open standards in the sense that anybody can write a program, which can be used by somebody else. Then structuring of programs in terms of module helps in program development and program maintenance very important program upgradation you want to add a new functionality you will find that it is very simple just change the state diagram and then find out that now in the new state diagram if you have three extra states simply you have to add those corresponding rungs. If you have some extra transitions you have to add those rungs and if you have to redirect some transitions to now new states, then you have to code that logic that is a that transition should now come in that state logic. So, you absolutely know that which are the places where you have to make change, these are the very standard benefits of structured programming. 
Then it is not necessary that all the times every rung has to be computed, in fact only a few rungs are actually active and the other ranks are inactive. So, there is no point you can save lot of computational time by skipping those. So, this needs to be done that will improve computational performance and it supports concurrency that is very important. That is, it supports concurrency because it very often happens that there are many things several things which will be taking place together and they which are best modeled as concurrent. especially now that we have you know this so called multitasking operating systems or or executives running on microprocessors. So, there is no reason why this cannot be done I mean we have the technology to enable it. So, therefore, we must use it. So therefore, we have a formalism which is standard and which are called sequential function charts. So, a sequential function chart is actually a graphical paradigm for describing modular concurrent control program structure. So, remember that it is basically used to describe structure and not the program itself. So, which part, so as if you have organized your program into a number of well written functions which are modularly arranged. So, you just using the sequential function chart you are just describing that which of these functions will be executed when and under what condition. So, you have this remark which says that the SFC merely describes the structural organization of the program modules, while the actual program statements in the modules still have to be written probably using existing PLC programming languages something like you know either RLL or instruction list or whatever. So, if let us look at quick look at the basic SFC constructs they are basically state machine like constructs and contain some constructs which are simple programming construct with which we are familiar. So, you have basically just like states and transitions here you have steps and transitions. So, each step is actually a control program module which may be programmed in RLL or any other language. So, as I said it is a function similarly, this steps can be of two types number one initial step I had remarked the initial step. And the initial step execution can be of two types one is when the first time you are executing after power on and second is if a particular if a program reset actually sequential function charts have some standard instructions which will which will reset which are assumed to be reset assumed to reset the program. So, after reset also you can execute a particular type of step called the initial step typically used for initializing variables and states and intermediate variables. Otherwise you have regular steps we which one of them are active at any time actually depends on the transition logic as we have seen. So, when a step becomes inactive its state is initialized and only active steps are evaluated during the scan that saves time. Now, we have transitions, so in transition each transition is also a control program module which evaluates based on available signals operator inputs etcetera, which transition conditions are getting enabled. So, once a transition variable evaluates it true, then the steps following it are activated. And we say steps, because we in this formalism typically we have seen that one transition can go to only one step provided you do not have concurrency, but since SFC is support concurrency. 
So, it may happen that after a one transition two concurrent you know threads of execution will start. So, that is why we have said steps following it are activated and those preceding it are deactivated. Only transition following active states, so again when you are in an in a state already which is active only certain transitions can occur. So, therefore, only those transitions need to be evaluated, so only those are evaluated. A transition can also be a it can either be a it can either be a program itself having embodying very complicated logic or it can be a simple variable like in our example which is simple enough. So, for example, in this case it says that S 1 is an S 1 is actually an initial step and then if transition while it is in while S 1 is active T 1 is active and if the conditions for T 1 are fulfilled, then it will come to S 2, if they if while at S 2, if T 2 will become also active and if T 2 occurs, then it will go to S 3. So, you see that if you see the activations in scan 1, it may so happen that S 1 and T 1 are active, while in scan 2 it may happen that T 1 has fired. Once T 1 has fired, that is actually taken place because T 1 is active. So, it has come to S 2 and then T 2 is active. So, if T 2 is active it is continuously evaluated it may happen that in scan 3 T 2 is evaluated to be false. So, these two continue to be active while at scan 4 it may happen that T 2 is now actually become true. So, therefore, S 3 has become active and then some any transition outgoing from it will become active. So, this is in this way certain parts of the program at a time becomes active as the system moves through the sequential function chart. For example, there are you can describe several kinds of you know programming constructs for example, this is an I am sorry. So, for example, this is a this is the case where the system if at S 1 at while the system is at S 1, T 1 and T 2 are active. So, any one of them can become true and if T 1 is true S 2 the next active state will be S 2, if T 2 is true the next active state will be S 3. While it could conceivably also happen that S 2 T 1 and T 2 in a given scan T 1 and T 2 both become active. So, in that case you have to resolve because both transition cannot take place simultaneously. So, you have to say that, so there is a convention that the left most transition becomes takes place actually takes place. So, if T 1 and T 2 will both become active at a same time true at a same time then T 1 is assumed to have taken place. So, that was a selective or alternative branch that is the system can either flow through this branch or flow through this branch, but not both of them simultaneously. So, as I said that if S 1 is active, T 1 is true and then S 2 is becomes active, if T 1 is false and T 2 is true then S 3 becomes active and then the moment S 3 becomes active, S 1 becomes inactive. Similarly, left to right priority as I said and only one branch can be active at a time. If S 4 is active and T 3 becomes true S 6 becomes active and S 4 becomes inactive same thing. Similarly, just like selective or alternative branches you can also have simultaneous or parallel branches. So, this is the new construct which is which we which is not possible to be you implemented in a normal state machine. So, this is the construct that helps concurrent control programs. The actual realization how you will run concurrent executions can be made using various operating system methodologies such as multitasking. 
So, here you have a parallel branch, so when you see that at S 1 if T 1 occurs then both these states become active together and then at some point of time S 4 becomes active, but T 2 cannot take place unless this branch also comes to S 5. So, only when S 4 and S 5 will become active at that time T 2 will become active and it will be evaluated. So, if it evaluates to true then it will come to S 6 and then S 4 and S 5 will become inactive. So, if that is there is same thing is written here if S 1 is active and T 1 becomes true S 2 and S 3 become active and S 1 becomes inactive. If S 4 and S 5 are active and T 2 becomes true then S 6 becomes active and S 4 and S 5 inactive. Similarly, you can have you know among program control statements you can have jumps. So, for example, you have a jump where after S 1 S 2 after T 3 it will go back to S 1. Similarly, you can have a jump inside the loop you can see I mean jumping here you can either come to S 5 or you can come to S 1. So, basically you can execute this kind of program control statements using SFCs. So, jump cycle jump occurs after transitions and jump into or out of parallel sequence is illegal, because you cannot jump out of one sequence when you are in parallel. You must enter them together in any parallel sequence while you are executing the sequence you may be in different states at different arms, but you must enter them together and you must leave them together. Similarly, last you may have termination of a program, so step without a following transition, if you have a step out of which there is no transition then it is a dead state and it once it is activated it can only be deactivated by an special instruction called SFC reset. You can nest programming, can nest a concurrent loop within a concurrent loop. So, here if T 1 occurs S 2 and S 3 then from S 3 if T 2 occurs then simultaneously S 2 S 4 and S 5 can become active. And if all of them are active then T 3 will take place and then S 6. Similarly you see here what is happening this exactly the same thing is happening that you here you have a selective branch. So, you can have you can either go into S 2 through T 1 or go into S 3 through T 2 that is a selective branch. While you are in a selective branch you can execute a parallel branch and then if when S 4 and S 5 are both active at that time if T 2 evaluates to true then you will come to a jump which will again take you to S 1. So, such control flows you can specify. So, we have come to the end of the lesson. In this lesson, we have what have we learnt? We have learnt the broad steps in the sequence control design and these are very important that first of all identify the inputs and Then, very critically study the system behavior, look at the requirements for manual control, operator safety, faults etcetera. And then try to formalize this description, first may be write the operation in crisp steps in English. And then try to convert this description of steps into some sort of formalism. And we have in this lecture, we have seen the formalism of sequential, I mean state machines which can be programmed using a sequential function chart. So, this modeling we have seen that how to model a particular application, we took a very simple industrial application like a stamping process and built its state machine. And then we have shown that how given the state machine, how very mechanically you can arrive at a, how you can actually structure your program and then write the transition logic and the state logic and the output logic. 
And we have seen that this is very much expected to lead to you know error free programs. And finally, we actually found a syntax I mean some language graphical language, which can actually capture this flow of logic in terms of states and transitions. So, this program control or the program flow description strategy using sequential function charts we have seen. So, now before ending I think it is nice to look at some problems. So, for your examples you can try to create, let us look at exercise problem number 1. So, the problem number 1 says create an RLL program for traffic light control. So, what happens that the lights should have crosswalk buttons for both direction of traffic lights. A normal light sequence for both directions will be green, a normal light sequence for both directions will be green 16 seconds and yellow 4 seconds. If the crosswalk button has been pushed, then a walk light will be on for 10 seconds. If the crosswalk button has been pushed, then a walk light. So, will be on for 10 seconds and the green light will be extended to 24 seconds, right. So, this is see this much of description is only given. Now, while so what is the task? The first task would be to develop a state machine description and probably write it in terms of an SFC. And then while going to do this SFC, we will actually encounter we can encounter several problems. For example, we may find that things are not some things are not straight. For example, let us look at the SFC of this problem. So, we will for the time being we will skip problem number two or let us look at problem number two next and then we will go to the SFCs. So, the next problem says design a garage door controller using an SFC. So, the behavior of the garage door controller is like this, there is a single button in the garage and a single button remote control. When the button is pushed the door will pushed once when the button will be pushed once then the door will move up or down. So, you push it once it will move up or down if the button is pushed once while moving then the door will stop and a second push will start motion again in the opposite direction. So, you have a single button which you are going to push. So, you press it once it might go up press it again it will stop press it again it will reverse. So, you have only one single button, so you see that this is purely sequential behavior. So unless you have a concept of state you can never you are actually pressing the same button, but sometimes it is stopping sometimes it is moving up sometimes it is moving down and this kind of logic it is not possible to model using only input output. Then you you have to bring in the concept of memory or state as you might have noticed when you have designed digital logic circuits. So, you have sequential when you have whenever you have memory you have sequential logic. So, let us look at then there are top bottom limit switches to stop the motion of the door obviously, you have pressed a button it is going up it has to stop somewhere. So, you have to have limit switches there is a light beam across the bottom of the door, if the beam is cut while the door is closing then the door will stop and reverse. 
So, if you have while the door is closing suppose if you put something below it then that is you are trying to ensure that the door can actually close because otherwise what will happen is that the door will get stuck if you have kept anything there suppose the back of your car is actually sticking out then the door will go and hit your car. So, the door will close fully only if there is a light beam which is not interrupted which means that the there is clearance. Otherwise, if the light beam is interrupted at any time immediately the door will stop and reverse they will think that there is something it cannot close it. There is a garage light that will be on for five minutes after the door opens or closes. So, you have a garage light which can be open for five minutes after the door opens or closes. So, you have to use the timer here. So, any time you do an operation the light is going to go on and stay five minutes. So, we have the two let us first look at the next one which is simpler. So, here you have the garage door. So, let us see how they have modeled it. So, you have in this case, so if button is pressed and so you have step 1 going to step 2 going to step 3, this is the closed door. So, if let us see let us go back. So, there is a single button in the garage and a single button remote control. So, you can have either a button pressed in the garage or you can have a button pressed in button pressed from the remote. So, there are two kinds of buttons. So, you have either a button pressed from the garage or from the remote then we go to step three and we start closing the door this is the output. On the other hand if button has been, so if from there if either a button has been pressed either from local or from remote or if, so if we press a button what will happen then then the door will stop. On the other hand, if the limit switch is made what will happen? The door will stop. So, if either a button has been pressed or the bottom limit switch is reached, then immediately the door will stop or if the light beam is interrupted, then it will not only stop, it will actually reverse. So, immediately it is going to reverse. Here what happens is that, if you have pressed a button and to stop it, then if you press it again then it will go to reverse. So, this is a you see we have we have captured the behavior of the garage door in the in this form using SFC. So, you can do you can do a similar thing also for the traffic light that let that be an exercise. So, that brings us to the end of this lesson. Thank you very much and see you again for the next lesson. Bye bye. Welcome to lesson 22, so far we have learnt about the basic functioning of a PLC, we have learnt how to write a program for it, but all this time we have seen the PLC as an abstract device. We have not seen what is inside a PLC system, what actually physically makes it. So, in this lesson we are going to look at the hardware environment of the PLC's, basically what PLC systems are made of. So, we are going to look at components, we are going to look at components of the PLC system, their architecture by which I mean that how they are organized, how they are connected and their functionality that is describing what they do. So, now let us see before we begin it is customary to see the instructional objectives. So, the instructional objectives are the following, first after going through this lesson one should be able to mention the distinguishing features of industrial automation computational tasks. 
A PLC basically is a computer, so it computes computation it performs computational tasks related to industrial automation, but these tasks have certain very distinguishing features compared to the other tasks which are let us say done in an office environment. So, we are going to you should be able to mention some of these features, mention major hardware components of a PLC, describe major typical functional features and performance specifications for the CPU, central processing unit I O or input output MMI demand machine interface and communication modules. Then explain the advantages of a function module, so what are the advantages of a function module which you will be able to explain and describe some major typical function module types. So, these are the instructional objectives the program execution in these systems is very interesting there are generally three four different types of program execution mode specified. For example, a mode could be cyclic by cyclic we mean that you have a number of computational tasks begin just like RLL program execution. So, begin here come to the end start all over again. So, this just goes on typically cycle time remains more or less constant, but it could vary little bit depending on the program logic. For example, if you have a if you have some program control statements, if then else kind of statements then whether some block will be executed or not will actually depend on the data. So, program execution time is not always constant, it actually depends on data, but roughly it will be constant. In fact, it is preferable that it is constant, so that you are not surprised that for some value of data suddenly your program execution takes a very long time and and your deadlines are missed. So, it is preferred that real time programs have predictable and not too much varying time requirements. The advantages of distributed network IOR well understood cost saving on maintaining integrity of high speed signals, because digital basically the advantages of digital communication and the advantages of having an intelligent module near the machine. So, you can have good sensor diagnostics fault can be much more you know monitoring functions can be realized without overloading the CPU, you can do special functions like startup. So, in a sense in such cases the PLC CPU really works like a supervisory system and the actual controls is done on the spot, so you have better centralized coordination monitoring. So, we have come to the end of the lecture and so we have I hope you have got a fairly a fair idea about what makes a PLC system and as is customer again you have some points to ponder. So, think of think whether you can mention two distinguishing features of industrial automation tasks compared to let us say a task in a bank with the which are also computational tasks, which also communicate. Mention five major components of a PLC system, we have mentioned more than five. So, you should be able to mention five and distinguish between normal distributed and networked I O. So, here we end today, thank you very much we will meet again. Thank you. Thank you." }, "/work/yuxiang1234/prompting-whisper/audios-1/VmpUR_vsKpY.mp3": { "gt": "Hello and welcome to lecture number 20 of this lecture series on Turbomachinery Aerodynamics. 
We have we have probably half way through this course, and I guess you must have had some good idea about what in, what is involved in turbomachinery analysis, and what is involved in design of different types of turbo machines, especially the compressors. Now, starting the last lecture onwards, we are now looking at the axial turbines, and of course subsequently we were also in talking about the radial turbines and so on. So, I think in the last class, you must have had got some introduction to what axial turbines are, and what constitutes axial turbines and so on. So, let us take that discussion little bit further, in today’s class where we will we will be talking about two-dimensional analysis of axial compressors, well axial turbines. In a very similar fashion to what we had discussed for axial compressors. If you remember during one of the initial lectures probably the lecture - the second lecture or the third lecture, we had been talking about axial compressors, and how one can analyze axial compressors in a two-dimensional sense. So, we will we will carry out the similar analysis and discussion in today’s class about how the same thing can be carried out for turbines, axial turbines in particular. In today’s class we basically being going to talk about the following topics. We will initially discuss, have some introduction to axial turbines, turbines in general. We will talk about impulse and reaction turbine stages which are the two basic types of axial turbines. We will then talk about the work and stage dynamics, how you can calculate work done by a turbine, and how is it different for impulse and reaction turbines. We will then spend some time on discussion about turbine blade cascade. We will assume that the nomenclature we had used in the case of compressors, will still be valid, but of course we will just highlight some simple differences between a compressor cascade, and a turbine cascade, but the nomenclature remains the same in the sense that what we had called as camber or stagger or incidence all that remains the same for a turbine. So, I will probably spend lesser time discussing about those and take up some more topics on cascade analysis which we had not really covered in detail in compressors. Now, when we talk about turbines, you must have had some discussion, some introduction to some the different types of turbines. As we know the different types of compressors like axial and centrifugal; similarly, we have different types of turbines as well. Now, in a turbine just like in a compressor, we have different components. In a compressor, we know that we have rotor followed by a stator. In the case of turbines we have a nozzle or a stator which pre-seals a rotor. So, a nozzle or a stator guides and accelerates the flow into the into a rotor, and of course, the work extraction takes place in the rotor. And which is unlike in a compressor, where it is the rotor which comes first and drives the flow, and then that goes into a stator which again turns it back to the acceleration and so on, diffusion takes place in both the rotor and the stator. In a turbine, as well you could have differential amounts of acceleration or pressure drop taking place in the rotor and the stator. And there are certain types of turbines where the entire pressure drop takes place only in the stator, this rotor does not contribute to any pressure drop, it simply deflects the flow, these are called impulse turbines. We will discuss that in little more detail in some of these later slides. 
So, basically the flow in a turbine is accelerated in nozzle or a stator. And it then passes through a rotor. In a rotor, the working fluid basically imparts momentum to the rotor, and basically that converts the kinetic energy to power output. Now, depending upon the power requirement, this process obviously is repeated in multiple stages, you would have number of stages which will generate the required work output, which is also similar to what you have in a compressor, where you might have multiple stages which basically are meant to give you the required pressure rise in typical axial compressor. Now, we have seen this aspect in compressors as well that due to the motion of the rotor blades, you have basically two distinct components or types of velocities. One is the absolute component or type of velocity and the other is relative component or relative velocity. This was also discussed in detail in compressors, and so you would in in a turbine the analysis that we will do in by analyzing the velocity triangle. You will see that there are these two distinct components which will become obvious, when we take up the velocity triangles, and this is very similar to what you had discussed in compressors. So, if you have understood velocity triangle construction for an axial compressor, it is pretty much the same in the case of a turbine as well. So, that will that is probably the reason why it will make it simpler for you to understand the construction of the velocity triangle. Now, the fundamental difference between compressor and the turbine is the fact that the compressor is required to generate a certain pressure rise, there is a work input into the compressor, which is what is used in increasing the pressure across the compressor. Compressor operates in an adverse pressure gradient mode; that is the flow always sees an increasing the pressure downstream. In the case of a turbine, it is not that case, it is the other way round that the flow always sees a favorable pressure gradient, because there is a pressure drop taking place in a turbine which leads to a which is how the turbine extracts work from the flow. That is it converts part of the kinetic energy which the flow has into work output. And therefore, in a turbine the flow always sees a favorable pressure gradient, and that is one fundamental difference between a turbine and a compressor. Now, because you have a favorable pressure gradient, the the problems that we have seen in the case of compressor like, flow separation and blade stall and seers and all that does not really effect turbine, because a turbine the flows always in a accelerating mode, and so the problem of flow separation does not really limit the performance of a turbine. So, it is possible that we can extract lot more work per stage in a turbine as compare to that of a compressor. And therefore, you would if you have noticed schematic of typical modern day jet engine you will find that there are numerous stages of compressor may be 15 or 20, which are actually given by may be 2 or 3 stages of turbine. So, each stage of a turbine can actually give you much greater pressure drop, then what we can achieve or the kind of pressure rise we can achieve in one stage of a compressor, which is why a single stage of a turbine can drive multiple stages of compressors. 
So, that is a very important aspect that you need to understand, the fundamental reason for this being the fact that turbines operate in a favorable pressure gradient, while compressors operate in an adverse pressure gradient. So, there are limitations in a compressor which will prevent us from having very high values of pressure rise per stage; that is not a limitation in a turbine, and that is why you have a much greater pressure drop taking place in a turbine as compared to the pressure rise that you get from one stage of a compressor. So, turbines, like compressors, can be of different types; the compressors we have seen can be either axial or centrifugal, and in fact some literature also says we could have mixed types, axial and centrifugal mixed. A similar thing is also there in the case of turbines: you could have an axial turbine or a radial turbine, or a combination of the two called mixed flow turbines. Axial turbines obviously can handle large mass flows and obviously are more efficient; a very similar analogy we can take from compressors, which handle larger mass flows and are obviously more efficient. An axial turbine's main advantage is that it has the same frontal area as that of a compressor. And also it is possible that we can use an axial turbine with a centrifugal compressor, so that is also an advantage. And what is also seen is that the efficiency of turbines is usually higher than that of compressors. The basic reason again is related to the comment I made earlier, that turbines operate in a favorable pressure gradient, and so the problems that the flow sees in an adverse pressure gradient are not seen; there are no problems of flow separation except in some rare cases. And this also means that theoretically turbines are easier to design; well, easier is in quotes, in the sense that, you know, compressors require a little more care in terms of aerodynamic design, but of course turbines have a different problem because of high temperatures, and so turbine blade cooling and the associated problems; that is an entirely different problem altogether. So, aerodynamically, if you have to design a compressor and a turbine, turbines would be a tad easier to design than compressors, just because of the fact that you do not have to really worry about the chances of flow separation across a turbine, because it is always an accelerating flow. In the case of compressors that is not the case, and there is always a risk that a compressor might enter into stall. So, now that I have spoken a lot about types of turbines and their functions and so on, let us take a look at a typical axial turbine stage. So, what is shown here is a simple schematic of an axial turbine stage. An axial turbine stage consists of, as I mentioned, a nozzle or a stator followed by a rotor. So, this is just representing a nozzle through which hot gases from the combustion chamber are expanded, and then that passes through a rotor, which is what gives us the power output. The rotor is mounted on what is known as a disc, and of course, the flow from the rotor is exhausted into either the next stage or through the component downstream, which could be a nozzle in the case of an aircraft engine. So, usually we would be denoting the stator inlet as station 1, the stator exit as station 2 and the rotor exit as station 3. 
In some of the earlier generation turbines, the disc was a separate entity; the rotor blades were mounted on slots provided on the disc, and so there were separate mechanisms for mounting rotor blades on the disc. It was very soon realized that having a separate disc and separate blades obviously increases the number of parts, so the part count increases tremendously. But with modern day manufacturing capabilities, in terms of 5-axis and 7-axis numerically controlled machines, called CNC machines, computer guided machines, it is possible for us to make them out of a single piece. And this is done in smaller sized engines now, and some of the companies have their own names for that; for example, GE calls such a disc, which is a combination of the disc and the blades, a blisk. Blisk means blade and disc together, machined out of a single piece of metal. And similarly, their competitors also have their own terminologies; Pratt and Whitney calls it the Integrated Blade Rotor or IBR, where there is no distinct root fixture for a blade, because the blade and the disc are a single component. The main advantage is that you have significantly reduced the number of parts. Whereas, let us say, a typical turbine rotor may have something like 70 to 80 blades or even more, of course, mounted on a disc, so that is like 80 to 90 parts for one stage of a rotor; now, if you have a blisk, you have just one component, because all the blades have been machined on one disc. That is a tremendous advantage in terms of the maintenance aspect. But at the same time, the primary disadvantage is the fact that if there is one blade which gets damaged, in the earlier scenario you just have to replace that blade; here it becomes impossible to replace the blade, and so then of course you will have to do rebalancing of the disc, and if the damage is severe then the whole disc has to be replaced. Of course, there are pros and cons of the integrated blade rotor concept, there are a lot of disadvantages and advantages, but at least for smaller engines, economically, that is in the long run, it seems to be an advantage that you have a combination of the blade and the disc. So, having understood some of the fundamentals of turbines, let us move on to the more important aspect of analysis, the two-dimensional analysis, that is to do with velocity triangles. I think we spent quite some time discussing velocity triangles for compressors. So, I will assume that you have understood the fundamentals of velocity triangles and try to move on to constructing the velocity triangles directly, unlike in compressors where I had done it step by step. The process is exactly the same as what you have done for a compressor, but of course, it being a turbine, there are certain differences which you need to understand. Now, velocity triangle analysis is an elementary analysis, and this is elementary to axial turbines as well, just like in the case of compressors. Now, the usual procedure is to carry out this analysis at the mean blade height, and we will take the blade speed at that height to be U, capital U. The absolute component of velocity will be denoted by C and the relative component we will denote by V. And the axial velocity, the absolute component of that, is denoted by C subscript a; just like in compressors, tangential components will be denoted by a subscript w. 
So, C w is of absolute component of tangential velocity, V w is the relative component in the tangential direction. And regarding angles, alpha will denote the angle between the absolute velocity and the axial direction, and beta denotes the corresponding angle for relative velocity. So, these are the terminologies, nomenclature that we have used, even in a compressor we will follow exactly the same nomenclature in the case of turbine as well. So, let us move directly to a velocity triangle of a typical turbine stage. So, turbine stage as we have already seen consists of a rotor, well a stator or a nozzle. It is usually refer to as nozzle in the case of turbine, because the flow is accelerated in in an stator of a turbine, and that is why it is called an nozzle, and then you have a rotor which follows a stator or a nozzle. Now, inlet to stator is denoted as station 1, exit is denoted as station 2, exit of the rotor is station 3. So, let us say there is an inlet velocity which is given by C 1 which is absolute velocity entering at an angle alpha 1, it exists the stator or nozzle with a highly accelerated flow which is C 2, you can see that C 2 is much higher than C 1, and that is exactly the reason why this is called a nozzle. Now, at the rotor entry, we also have a blade speed U; please note that the direction of this vector U is from the pressure surface to the section surface, unlike in compressor where it was other way round. Here the flow drives the blades and that is why you have the blade speed which is in this direction. This is the absolute velocity entering the rotor and relative velocity will be the vector some of these two or vector difference between these two and that is given by V 2. Alpha 2 is the angle which C 2 makes with the axial direction, beta 2 is the angle which V 2 makes with the axial direction. And just like we have seen in compressors V 2 enters the rotor at an angle which is tangential to the camber at the leading edge. This is to ensure that the flow, this is obviously when the incidence is close to 0 to ensure that the flow does not separate. At the rotor exit, we have V 3.V 3 is less then V 2 as you can see, and of course, that also depends upon the type of the turbine, whether it is impulse or reaction, and you also have C 3 here and this is the blade speed U. Beta 3 is the angle which V 3 makes with the axial direction, alpha 3 is the angle which the absolute velocity C 3 makes with the axial direction. So, if you now come go back to the earlier slide is of lecture 2 or 3, where we had discussed about velocity triangles for an axial compressor, you can quite easily see the similarities as well as the differences. I would strongly urge you to compare both these velocities triangles by keeping them side by side. So, you can understand the differences between a compressor and a turbine; at the same time, you can also try to figure out some similarities between these two components. And so it is very necessary that you have understand clearly both the differences as well as the similarities from a very fundamental aspect that is the velocity triangle point of view. So, this is a standard velocity triangle for a typical turbine stage. I am not really mentioned here, what kind of a turbine it is whether it is impulse or reaction, we will come to that classification very soon, and you will see that there are different ways, in which you can express the velocity triangle for both of these types of turbines. 
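The velocity-triangle bookkeeping described above can be captured in a few lines. The sketch below resolves the station 2 (rotor inlet) triangle from an assumed blade speed U, axial velocity C a and absolute flow angle alpha 2; the numerical values and the sign convention (tangential components taken positive in the direction of blade motion) are illustrative assumptions, not values from the lecture.

```python
import math

def rotor_inlet_triangle(U, Ca, alpha2_deg):
    """Resolve the station-2 velocity triangle from blade speed U, axial velocity Ca
    and the absolute flow angle alpha2 (angles measured from the axial direction).
    Tangential components are taken positive in the direction of blade motion."""
    alpha2 = math.radians(alpha2_deg)
    Cw2 = Ca * math.tan(alpha2)                # tangential component of absolute velocity
    C2 = math.hypot(Ca, Cw2)                   # absolute velocity magnitude
    Vw2 = Cw2 - U                              # tangential component of relative velocity
    V2 = math.hypot(Ca, Vw2)                   # relative velocity magnitude
    beta2 = math.degrees(math.atan2(Vw2, Ca))  # relative flow angle
    return {"C2": C2, "Cw2": Cw2, "V2": V2, "Vw2": Vw2, "beta2_deg": beta2}

# Illustrative numbers only (not from the lecture): U = 340 m/s, Ca = 250 m/s, alpha2 = 60 deg
print(rotor_inlet_triangle(U=340.0, Ca=250.0, alpha2_deg=60.0))
```

The same helper, with the signs of U and the tangential components handled consistently, serves for station 3 as well.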
So, let us now try to take a look at the different types of turbines. I mentioned in the beginning that there are two different configurations of axial turbines that are possible, the impulse and the reaction turbine. In an impulse turbine, the entire pressure drop takes place in the nozzle, and the rotor blades would simply deflect the flow and would have a symmetrical shape. So, there is no acceleration or pressure drop taking place in the rotor in an impulse turbine. So, the the rotor blades would simply deflect the flow and guided to the next nozzle if there is one present. In a reaction turbine on the other hand, the pressure drop is shared by the rotor as well as the stator. And the amount of pressure drop that is shared is defined by the degree of reaction, which we will discuss in detail in the next lecture. Now, which means that the degree of reaction of an impulse turbine would be 0, because the entire pressure drop as already taken place in the stator, the rotor does not contribute to any pressure drop, and so the degree of reaction for an impulse turbine should be 0. So, these are two different configurations of axial turbines which are possible. And what will do is that we will take a look at their velocity triangles also, but before that we need to understand the basic mechanism by which work is done by a turbine. Now, if you were to apply angular momentum equation for an axial turbine, what you will notice is that power generated by a turbine is a function of well three parameters; one is of course a mass flow rate, the other parameters are the blade speed and the tangential component of velocity - the absolute velocity. So, if you apply angular momentum at the inlet and exit of the rotor, then the power generated by the turbine is equal to mass flow rate multiplied by U 2 into C w2 which is the product of the blade speed and the tangential velocity absolute at the inlet of the rotor, minus U 3 times C w3 which is again blade speed at rotor exit, and multiplied by the tangential component of the absolute velocity at the rotor exit. Now, we would normally assume that the blade speed is does not change from at a given radial plane, and therefore U 2 can be assume to be equal to U 3, and therefore the work done per unit mass would now be equal to blade speed that is U multiplied by C w2 minus C w3 or which is also equal to the from the thermodynamics point of view, there is a stagnation pressure, stagnation temperature drop taking place in a turbine, because the turbine expands the flow, and work is extracted from the turbine, and therefore there has to be a stagnation temperature drop taking place in a turbine. Therefore the enthalpy difference between the inlet and exit of the turbine would basically equal to the work done by or work developed by this particular turbine. So, work done per unit mass is also equal to C p time T 01 minus T 03, where this is basically the enthalpy difference; C p T 01 is enthalpy at inlet of the turbine, C p T 03 is the enthalpy at the exit of the turbine. Let us now denote delta T 0 which basically refers to the stagnation temperature. The net change in the stagnation temperature in the turbine delta T naught is equal to T 01 minus T 03 which is also equal to T 02 minus T 03, because 1 to 2 is the stator and there cannot be any change in stagnation temperature in the stator. Therefore, T 01 minus T 03 is equal to T 02 minus T 03. 
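A quick numerical rendering of the relations just stated may help: specific work w = U (C w2 − C w3) with U 2 = U 3 = U, and the corresponding stagnation temperature drop ΔT 0 = w / c p. The velocity numbers and the c p value in the sketch below are illustrative assumptions only.

```python
def stage_work_and_temperature_drop(U, Cw2, Cw3, cp=1148.0):
    """Euler work per unit mass for an axial turbine stage, assuming U2 = U3 = U,
    and the corresponding stagnation temperature drop T01 - T03 = w / cp.
    cp = 1148 J/(kg K) is a typical value assumed for hot combustion gases."""
    w = U * (Cw2 - Cw3)   # specific work, J/kg
    dT0 = w / cp          # stagnation temperature drop across the stage, K
    return w, dT0

# Illustrative numbers only: U = 340 m/s, Cw2 = 430 m/s, Cw3 = -90 m/s (exit swirl reversed)
w, dT0 = stage_work_and_temperature_drop(U=340.0, Cw2=430.0, Cw3=-90.0)
print(f"specific work = {w / 1000:.1f} kJ/kg, stagnation temperature drop = {dT0:.1f} K")
```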
So, we now define what is known as the stage work ratio, which is basically delta T naught by T 01 and that is equal to U times C w2 minus C w3 divided by C p times T 01. So, this is basically follows from these two equations here which correspond to the work done per unit mass; one is in terms of the velocities and other is in terms of stagnation temperatures. So, a similar analysis was also carried out when we were discussing about axial compressors, and were also we had a kind of equated the work that the flow does on well work done by the compressor on the flow as compared to the stagnation temperature rise taking place in a compressor as a result of the work done on the flow. So, there are also we have defined the pressure rise or pressure ratio per stage in terms of the temperature rise across that particular stage, and the velocity components which come from the velocity triangles. Now, what you can see here is that - the turbine work per stage would basically be limited by two parameters; one is the pressure ratio that is available for expansion, and of course the other aspect is the allowable the amount of blade stress and turning that is physically possible for one to achieve in in the case of a particular turbines. So, there are two parameters; one being the available pressure ratio and other is allowable blade stress and turning. That one can achieve in a particular turbine configuration. So, in unlike in a compressor where we also had the issue of boundary layer behavior, because the flow was always operating in an adverse pressure gradient mode in compressors, in a turbine the pressure gradient is favorable. So, boundary layer behavior is generally something that can be controlled, and there are normally not much issues related to boundary layer boundary layer separation or growth of boundary layer and so on. Of course, there are certain operating conditions, and which under which certain the stages of turbine may undergo, local flow separation, but that is for only short durations. In general in a favorable pressure gradient boundary layers generally, tend to be well behaved. Now, the turbine work ratio that we had seen in the previous slide is also often defined in and as a ratio between the work done per unit mass divided by the square of the blade speed. Therefore, W t by u square which is also equal to the enthalpy rise or rather enthalpy drop in the case of turbine divided by U square, which is basically equal to delta C w divided by U or net change in the tangential velocity absolute divided by the blade speed. Now, this is an important parameter, because based on this we can understand or the differences between an impulse turbine and a reaction turbine, which is what we are going to next to take a look at what are the fundamental differences, besides of course, the fact that in an impulse turbine, flow is the entire pressure drop takes place only in the nozzle and in reaction turbine that is shared between the nozzle and the rotor. Let us take up an impulse turbine first and we will take look at the velocity triangles for an impulse turbine, and then try to find out the work ratio per stage of an impulse turbine, and related to some parameters which we can get from the velocity triangles. So, here we have a typical impulse turbine stage, a set of a row of nozzle blades followed by a row rotor of the blades. And… So, flow is accelerated in the nozzle, and so the velocity that reaches the rotor. 
The absolute component is C 2, and at an angle of alpha 2 with the acceleration and as the result of the blade speed U, the relative velocity which enters the rotor is V 2 which is at an angle of beta 2 with the acceleration. And in an impulse turbine, I mentioned that the rotor simply deflects the flow and there is no pressure drop taking place in the rotor, and therefore, at the exit of the rotor we have V 3 which is at an angle of beta 3 by virtue of the symmetry of the blades, we will have beta 2 is equal to minus beta 3, and velocity in magnitude v 2 would be equal to v 3. So, which we can also see from the velocity triangle shown here; C 2 is the absolute velocity and entering the rotor, V 2 is the relative velocity and the corresponding angles here alpha 2 and beta 2. Now, in the rotor we have V 3 which is equal to V 2 in magnitude, but at an angle which is different from the inlet, that is beta 3 will be negative of beta 2 in the other direction. Absolute velocity leaving the blade is C 3. Now, if you look at the other components of a velocities like this is the actual component of the absolute velocities C a, and the corresponding tangential components of the relative velocity which are obviously equal and are opposite in direction like V w2 and V w3; you can see that these are equal in magnitude, but of course the that directions are opposite, because V 2 and V 3 are in opposite directions. And C w2 is the absolute component of well tangential component of the absolute velocity are inlet, C w3 that at the exit of the rotor. So, this is typical velocity triangle of of an impulse turbine stage. And if if you take a closure look at the velocity triangles, I have mentioned that the angles beta 3 and beta 2 are equal in magnitude, but they are different by their orientations. So, beta 3 is equal to minus beta 2 which means that we have V w3 is equal to minus V w2. And the difference in the tangential component of the absolute velocities C w2 minus C w3 will be equal to twice of V w2. So, let us take a look at the velocity triangle again C w2 is this minus C w3 is equal to the sum of V w2 and Vw3, and since they are equal we have that is equal to twice of V w2, which is also equal to 2 into C w2 minus U or this is equal to 2 U into C a by tan alpha 2 minus 1. So, that is again coming from the velocity triangles, you can see that C a tan alpha 2 is this component minus U is equal to twice of this. So, the difference between the tangential component of the absolute velocity C w2 and C w3 that is delta C w for an impulse turbine equal to 2 U into C a by U tan alpha 2 minus 1. Therefore, the work ratio that we have defined earlier, for an impulse turbine that is delta h naught by U square is equal to 2 U into C a by U tan alpha 2 minus 1. We will now, take a look at what happens in the case of an of a reaction turbine and calculate the work ratio as applicable for a reaction turbine, and see the is there a difference fundamentally in the work ratio of an impulse turbine and a reaction turbine. Now, let us take a look at a typical 50 percent reaction turbine, just for simplicity. The reason why we took up a 50 percent reaction turbine is, because in a 50 percent reaction turbine the pressure drop is shared equally between the nozzle and the rotor. 
And therefore, the velocity triangles as you can see are mirror images of one another; the velocity triangle at the inlet of the rotor is this, where this is C 2 the absolute velocity coming in from the rotor from the nozzle; V 2 is a relative velocity and this is the blade speed. And since they are mirror images and the exit of the rotor, you have V 3 and C 3. And therefore, you can clearly see that C 2 will be equal to V 3 and V 2 will be equal to C 3, corresponding the angles alpha 2 will be equal to beta 3, and beta 2 will be equal to alpha 3. For this is true for only 50 percent reaction turbine, for any other reaction stages of course, the velocity triangles need not necessarily be symmetrical, and this is also assuming that the axial velocity is does not change across the rotor and the nozzle. Now, for this kind of a reaction turbine which is having a degree of reaction of 0.5; since the velocity of triangles are mirror images are symmetrically. If we assume constant axial velocity, we have C w3 is equal to minus C a tan alpha 2 minus U. And therefore, the turbine work ratio would basically be equal to twice into twice of C a by U tan alpha 2 minus 1. This we can compare with that of the impulse turbine where it was 2 U multiplied by C a by U tan alpha 2 minus 1. So, you can immediately see that there is fundamental difference between the work ratio as compared to a turbine which is impulse or in this case of course example was for a 50 percent reaction of turbine. So, there is a fundamental difference between the work ratio as applicable for an impulse turbine as compared to that of a 50 percent reaction turbine, and in general for any reaction turbine as well. Now, this was as per as the different types of turbine configurations were concerned and how one can analyze these turbine configurations. And what are the fundamental differences between let see an impulse turbine and a reaction turbine, and how one can from the velocity triangle estimate the work ratio that or the work done by these kind of turbine stages. So, what I was suggesting right of the beginning was that you can clearly see differences between the compressor and turbines by looking at the velocity triangle for these two different cases, and comparing them to understand the fundamental working of compressors and turbines and what makes them two different components. What we can take up next for discussion is something we have discussed in detail for compressors as well; that is to do with a cascade. And as you have already seen a cascade is a simplified version of rotating machine, and you could have different versions of cascade, you could have a linear cascade or an annular cascade. And basically a cascade would have a set of blades which are arranged; set of similar blades which are all arranged in certain fashion, and at a certain angle which we have referred to as the straggler angle. And cascade analysis forms a very fundamental analysis of design of turbo machines whether it is compressors or turbines. So, cascade basically consists of an array of stationary blades. And constructed basically from measurement of performance parameters, and what is usually done is that we would like to eliminate any three-dimensional effects which are likely to come up in a cascade. 
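Before going further into the cascade discussion, the two work-ratio expressions derived above can be put side by side: Δh 0 / U² = 2 (C a / U · tan α 2 − 1) for the impulse stage, and 2 C a / U · tan α 2 − 1 for the 50 percent reaction stage with mirror-image triangles. The sketch below simply evaluates both; the flow coefficient and nozzle angle used are assumptions for illustration.

```python
import math

def work_ratio_impulse(Ca_over_U, alpha2_deg):
    """Stage work ratio (delta h0 / U^2) for an impulse stage, as derived above:
    2 * (Ca/U * tan(alpha2) - 1)."""
    return 2.0 * (Ca_over_U * math.tan(math.radians(alpha2_deg)) - 1.0)

def work_ratio_50pct_reaction(Ca_over_U, alpha2_deg):
    """Stage work ratio (delta h0 / U^2) for a 50 percent reaction stage with
    mirror-image triangles, as derived above: 2 * Ca/U * tan(alpha2) - 1."""
    return 2.0 * Ca_over_U * math.tan(math.radians(alpha2_deg)) - 1.0

# Illustrative comparison at Ca/U = 0.8 and alpha2 = 60 deg (both values are assumptions)
for name, ratio in [("impulse", work_ratio_impulse), ("50% reaction", work_ratio_50pct_reaction)]:
    print(f"{name:12s}: dh0/U^2 = {ratio(0.8, 60.0):.2f}")
```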
And one of the sources of three dimensionality is the presence of boundary layer .So, one would like to remove boundary layer from the end walls of the cascade, and so that is the standard practice one would have porous end walls through which boundary layer fluid can be removed; to ensure two dimensionality of the flow entering into a cascade. Now, it is also a standard assumption that radial variations in velocity field can be kind of eliminated or ignored. And cascade analysis is primarily meant to give us some idea about the amount of blade loading that a particular configuration can give us, as well as the losses in total pressure that one can measure from a cascade analysis. So, and in turbine cascades testing also involves wind tunnels which are very similar to what we have discussed for compressors. I had shown you cascade wind tunnels, when we are discussing about a cascades in the context of compressors. In turbine cascades are also tested in similar wind tunnels. And just that in a case of turbines, since they are operating in an accelerating flow. There there is a requirement of a certain pressure drop across a turbine. So therefore, the wind tunnel is required to generate sufficient pressure which can be expanded through a turbine cascade. Now, turbine blades has are probably aware would are likely to have much higher camber, than compressor cascades or compressor blades. And turbine cascades are set at a negative stagger unlike in compressor blades; something I will explain when we take up a cascade, schematic in in detail. Now, cascade analysis will basically give us as I mentioned two parameters besides the sets of other parameters, like boundary line thickness and all and losses, etcetera. The most fundamental parameter we would like to look at from the cascade analysis is this surface static pressure distribution or CP distribution, which is related to the loading of the blade, and the second aspect of the is the total pressure loss across the cascade, which is yet another parameter that one would like to infer from the cascade analysis. Now, let us take a look at a typical cascade, turbine cascade nomenclature. I think I mentioned in the beginning that all the terms that we have used for compressors will it is the same nomenclature that we apply for a turbine as well. Just that that the way the blades are set, so the blade geometry they are quiet different between compressors and turbines. So, if we look at a typical compressor cascade, these are the blades you can immediately see that these blades have much higher turning for camber than compressor blades. So, it is a set of these blades which are arranged either linearly or in an annular fashion which constitute a cascade. So, these blades are set apart by a certain distance, which is as you can see denoted by pitch or spacing. And these blades are set at a certain angle, which is called the blade setting or the stagger angle. So, you can see this lambda which you see here refers to the blade setting or stagger angle. The blades have a certain camber which is basically the angle subtended between the tangent to the camber line at the leading edge and that at the trailing edge. So, the difference between that gives us the a blade camber. Now, the flow enters the cascade at a certain angle, you can see that inlet blade angle is given here as beta 1 and the blade outlet angle is beta 2. 
Now, so if there is a difference between the blade angle and the flow angle at the inlet; that basically the incidence which is denoted by i here .So, this is the incidence angle. Similarly, a difference between the blade outlet angle and the outflow angle is the deviation which is denoted by delta. So, at the exit you may have flow deviation and the inlet one may have an incidence. And if you draw a normal rect normal to the tangent at the trailing edge and take it to the next adjacent blade, the section surface of the adjacent blade. So, this distance that you see here is basically refer to as the throat or opening at the turbine exit, and that is here denoted by a symbol o. The blade called as you already know is denoted by C. And then the blades also would have a certain finite thickness at the trailing edge. So, that is denoted here by the trailing edge thickness. So, the blades practically will have a certain amount of finite thickness and that is what is denoted here as the thickness at the trailing edge. So, these are the fundamental nomenclatures nomenclatures that used in turbine very similar aspect was also used in compressor, where we had defined all these different parameters like incidence, deflection, deviation and blade angle, the camber, the pitch, stagger all of them defined. Difference is, of course, the way the blades are set, this is set at a negative stagger as you can see, the compressor cascade if you go back you will see that the way the blades are set is opposite to what you see in the case of a turbine. That is basically to ensure that the flow passage gives you the required amount of flow turning, and also the flow acceleration in the case of turbine cascades, and in in compressor cascade, the setting is to ensure that you get a defilation in a compressor. So, having understood the fundamental nomenclature of a turbine cascade; we would now take a closer look at the different aspects of flow through a cascade and I would be deriving well not really a detailed derivation. But I would just give you some idea about how one calculates the lift developed by a certain cascade turbine cascade. In two different cases, one is if you do not assume any losses or if it is inviscid analysis, and followed by an viscous analysis one of course would also get a drag in the case of viscous analysis, how one calculate the lift and of course that is basically related to loading of the blades eventually. So, the basic idea of cascade analysis is that just like in case of an airfoil, because cascade is in some sense in airfoil analysis, we can determine the lift and drag forces acting on the blades. And this analysis, as I mention can be carried out using both these assumptions potential flow or inviscid analysis or by considering viscous effect in a rather simplistic manner. So, we will assume that the mean velocity which we going to denote as V subscript m, makes an angle of alpha subscript m with axial the direction. What we will do is to determine the circulation developed on the blades, and subsequently the lift force. In the inviscid analysis obviously there is no drag and there is only a lift force, which lift is only force acting on the blade. In the case of an inviscid analysis, when you take up a viscous analysis there are two components of a force and resultant force, they will lift and they drag. So, this is the geometry, we are considering for an inviscid flow through turbine cascade. 
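Before working through the circulation argument, the angle definitions just listed can be collected in one small helper. The sign conventions used here (incidence as flow angle minus blade angle at inlet, deviation as flow angle minus blade angle at exit, camber as the difference of the two blade angles, all measured from the axial direction) are assumptions carried over from the compressor cascade notes, and the numbers are purely illustrative.

```python
def cascade_angles(blade_inlet_deg, blade_outlet_deg, flow_inlet_deg, flow_outlet_deg):
    """Basic turbine-cascade angle bookkeeping with the nomenclature used above:
    incidence  i     = flow inlet angle  - blade inlet angle
    deviation  delta = flow outlet angle - blade outlet angle
    camber     theta = blade inlet angle - blade outlet angle (turning of the camber line)
    All angles are in degrees, measured from the axial direction; sign conventions
    are an assumption, since the lecture only states them verbally."""
    incidence = flow_inlet_deg - blade_inlet_deg
    deviation = flow_outlet_deg - blade_outlet_deg
    camber = blade_inlet_deg - blade_outlet_deg
    return incidence, deviation, camber

# Illustrative values only (degrees): blade angles 35 / -60, flow angles 38 / -57
print(cascade_angles(35.0, -60.0, 38.0, -57.0))
```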
If, you take a look at two different stream lines let us say, this is one stream line and another stream line which is bounding one particular blade, that is shown here these are the two different stream lines. What we are going to do is to find the circulation reduced over this particular airfoil which is currently an airfoil here, and then relate that to lift developed on this particular blade. So, the inlet flow the entering the cascade is V 1 and the flow exceeding the blade is V 2, and of course, we will assume mean velocity of V m which makes an angle alpha m with the acceleration. So, if this is the case and this is how you can take a look at this circulation axis; so, this is the axis along which we are calculating the circulation, and therefore, this is the lift acting on this particular blade. Since it is a turbine blade, you know that this is basically the direction in which the lift is going to act. So, the mean velocity that is showed here by vector V m acts in this direction, this is inflow velocity V 1, and this is the exit velocity V 2. So, the circulation that is denoted by capital lambda here is equal to S multiplied by the difference in the tangential velocities V w2 minus V w1. And lift is related to the circulation which is product of density, times, the mean velocity and the circulation. Therefore, lift acting when there are no other effects considered like viscous affects, then the lift acting here would be simply the product of rho times V m into the circulation which S into V w2 minus V w1. So, this is expressed in a non-dimensional form which we referred to as the lift coefficient. So, see l here lift divided by half rho V m square into C, and this is equal to rho into V m into S V w2 minus V w1 by half rho V m square into C. So, this can be related to the angles, the across, the cascade, and so we can simplify this lift coefficient as 2 into S by C into tan alpha 2 minus tan alpha 1 multiplied by cos alpha alpha m. So, this is the this is basically lift coefficient on a turbine blade, assuming that flow is in this inviscid. Now, what happens if there are viscous effects? The primary effect of viscous flow on the flow through a turbine cascade is the fact that viscous effects manifest themselves in the form of pressure losses - total pressure losses. And therefore, the wake from the blade trailing edge will lead to a non uniform velocity leaving the blades. In the previous analysis, we were assuming uniform velocity entering the blades and uniform velocity leaving the blades, because it is a potential flow. So, here in the case of viscous analysis in addition to lift, one would also have a drag which we will also contribute to left in some where the other. So, the effecting force acting on the blade will be resultant of both the left as well as the drag acting on the blade. So, we now defined what is known as total pressure loss coefficient where defined a similar parameter for compressors as well. So, this is denoted by omega bar, because there is a total pressure loss taking place across the blades as a result of the viscous effects. So, omega bar is equal to P 01 minus P 02 divided by half rho V 2 square. This is the losing total pressure across the turbine cascade. So, the schematic ahead shown earlier now gets modified, because you have a set of uniform stream lines entering turbine cascade, but as they leave the cascade you can see that they have became non uniform, basically at the trailing edge where there is a wake. 
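Before the viscous case is completed below, the inviscid cascade relations just derived can be summarised in code: circulation Γ = s (V w2 − V w1), lift L = ρ V m Γ, and C L = 2 (s / c)(tan α 2 − tan α 1) cos α m. The mean-angle definition tan α m = (tan α 1 + tan α 2)/2 is an assumption consistent with the earlier compressor cascade treatment, and all numerical values are illustrative only.

```python
import math

def inviscid_cascade_lift(s, c, alpha1_deg, alpha2_deg, rho, Vm):
    """Inviscid (potential-flow) cascade relations as discussed above.
    s: pitch, c: chord, alpha1/alpha2: flow angles from the axial direction,
    Vm: mean velocity.  tan(alpha_m) = (tan a1 + tan a2)/2 is assumed."""
    t1 = math.tan(math.radians(alpha1_deg))
    t2 = math.tan(math.radians(alpha2_deg))
    alpha_m = math.atan(0.5 * (t1 + t2))
    Va = Vm * math.cos(alpha_m)        # axial component of the mean velocity
    gamma = s * Va * (t2 - t1)         # circulation around one blade
    lift = rho * Vm * gamma            # lift per unit span from the circulation
    CL = 2.0 * (s / c) * (t2 - t1) * math.cos(alpha_m)
    return gamma, lift, CL

# Illustrative numbers only: pitch 0.04 m, chord 0.06 m, alpha1 = 0 deg, alpha2 = 65 deg
print(inviscid_cascade_lift(s=0.04, c=0.06, alpha1_deg=0.0, alpha2_deg=65.0, rho=1.2, Vm=150.0))
```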
So this, what is shown here schematically is the these are the different wakes of all these blades that are present here. So, there is difference in the forces acting on the blade as a result of this non uniformity in the velocity at the at the exit of the turbine cascade. So, in this case, we can calculate drag as equal to the losses, we can relate the drag to the losses total pressure losses, omega bar into S into cos alpha m. And therefore, the effective lift will now be equal to the sum of the lift as well as the component of drag in that effective direction that is omega bar into S into cos alpha m. And lift we know is the product of density and the mean velocity and the circulation. So, that is rho V m into delta plus omega bar S cos alpha one alpha m. Therefore, the lift coefficient in this case will get modified as twice into S by C tan alpha 2 minus tan alpha 1 cos alpha m plus the drag components C D times tan alpha m. So, this is the manner in which we can calculate lift coefficient for both this cases; one is for case the without viscous effects and the second is if we consider the viscous effects. So, the basic idea for calculating these coefficients was to calculate, also calculate the blade efficiency. So, based on the calculation of the lift and drag coefficient, we can now calculate the blade efficiency, which is basically the ratio of ideal static pressure drop to obtain a certain degree of kinetic energy change to the actual static pressure drop which will produce the same change in kinetic energy. Therefore, the blade efficiency is in have of course skip the derivation of the blade efficiency. But it can be related to the lift and drag coefficient like blade efficiency is 1 minus C D by C L tan alpha m divided by 1 plus C D by C L cot alpha m. And if you want to neglect the drag term in the lift definition, because C D - the drag term is usually much smaller in comparison to the lift. The blade efficiency is simply 1 by 1 plus 2 into C D divided by C L sin into twice alpha m. So, this basic idea of calculating the lift drag and coefficient was also to calculate the blade efficiency, which is basically a function of C D C L have the mean angle alpha m. So, let me now quickly recap our discussion in today’s class. We had taken up three distinct topics for discussion; one was the different types of turbine, configuration the axial turbine configuration, the impulse and the reaction turbine stages and we have done. We had a look at the velocity triangles and how we can calculate the work ratio for impulse and reaction turbine stages. We have also carried out the work and stage dynamic, we looked at these different components or configurations of axial turbines, and how we can go about determining the work ratio of these two configurations of axial turbine. And then we had some discussion on turbine cascades, and calculation of lift and drag for a typical turbine configuration, and how we can use that information to calculate the blade efficiency from simple turbine cascade testing. That is simple cascade testing can actually give us some idea about the blade efficiency that this kind of a blade configuration can give us. So, that bring us to end of this lecture. We will continue discussion on axial turbines in the next lecture as well, where we will primarily talking about the performance parameters, degree of reaction losses as well as efficiency of axial turbines. And were we also take up detailed discussion on whatever the different losses in a two-dimensional sense. 
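To tie together the cascade relations discussed above before the summary continues: the total pressure loss coefficient, the drag built from it, the corrected lift coefficient and the simplified blade efficiency can be strung together as below. The mean-angle definition, the constant-axial-velocity assumption and every numerical value here are assumptions for illustration, not values from the lecture.

```python
import math

def viscous_cascade_coefficients(s, c, alpha1_deg, alpha2_deg, omega_bar, rho, V2):
    """Viscous cascade relations following the discussion above:
      total pressure loss  dP0 = omega_bar * (1/2 * rho * V2^2)
      drag (per unit span) D   = dP0 * s * cos(alpha_m)
      CL  = 2*(s/c)*(tan a2 - tan a1)*cos(alpha_m) + CD*tan(alpha_m)
      eta ~ 1 / (1 + 2*(CD/CL)/sin(2*alpha_m))  (drag term neglected in CL)
    tan(alpha_m) = (tan a1 + tan a2)/2 and normalising CL, CD by the mean velocity Vm
    are assumptions carried over from the compressor cascade notes."""
    t1 = math.tan(math.radians(alpha1_deg))
    t2 = math.tan(math.radians(alpha2_deg))
    alpha_m = math.atan(0.5 * (t1 + t2))
    Va = V2 * math.cos(math.radians(alpha2_deg))   # axial velocity, assumed constant
    Vm = Va / math.cos(alpha_m)                    # mean velocity
    dP0 = omega_bar * 0.5 * rho * V2**2            # total pressure loss across the cascade
    drag = dP0 * s * math.cos(alpha_m)             # drag per unit span
    CD = drag / (0.5 * rho * Vm**2 * c)
    CL = 2.0 * (s / c) * (t2 - t1) * math.cos(alpha_m) + CD * math.tan(alpha_m)
    eta_blade = 1.0 / (1.0 + 2.0 * (CD / CL) / math.sin(2.0 * alpha_m))
    return CD, CL, eta_blade

# Illustrative values only: s = 0.04 m, c = 0.06 m, alpha1 = 0 deg, alpha2 = 65 deg,
# omega_bar = 0.05, rho = 1.2 kg/m^3, V2 = 250 m/s
print(viscous_cascade_coefficients(0.04, 0.06, 0.0, 65.0, 0.05, 1.2, 250.0))
```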
And how we can define the efficiency and you will see that different ways of defining efficiency for a turbine. So, we will take up some of these topics for discussion in the next class. . ", "transcription_base": " Hello and welcome to lecture number 20 of this lecture series on turbo machinery aerodynamics. We have probably half way through this course and I guess you must have had some good idea about what is involved in turbo machinery analysis and what is involved in design of different types of turbo machines especially the compressors. Now, starting the last lecture onwards we are now looking at the axial turbines and of course, subsequently we will also be talking about the radial turbines and so on. So, I think in the last class you must have had got some introduction to what axial turbines are and what constitutes axial turbines and so on. So, let us take that discussion little bit further in today's class, we will be talking about two dimensional analysis of axial compressors well axial turbines. In a very similar fashion to what we had discussed for axial compressors. If you remember during one of the initial lectures, probably the lecture second lecture or the third lecture, we had been talking about axial compressors and how one can analyze axial compressors in a two dimensional sense. So, we will carry out a similar analysis and discussion in today's class about how the same thing can be carried out for turbines, axial turbines in particular. In today's class we are basically being going to talk about the following topics. We will initially discuss have some introduction to axial turbines, turbines in general. We will talk about impulse and reaction turbine stages which are the two basic types of axial turbines. We will then talk about the work and stage dynamics how you can calculate work done by a turbine and how is it different for impulse and reaction turbines. We will then spend some time on discussion about turbine blade cascade. We will assume that the nomenclature we had used in the case of compressors will still be valid, but of course, we will just highlight some simple differences between a compressor cascade and a turbine cascade, but the nomenclature remains the same in the sense that what we had called as camber or stagger or incidence all that remains the same for a turbine. So, I will probably spend lesser time discussing about those and take up some more tropics on cascade analysis, which we had not really covered in detail in compressors. Now, when we talk about turbines, you must have had some discussion of some introduction to the different types of turbines as we know there are different types of compressors like axial and centrifugal. Similarly, we have different types of turbines as well. Now, in a turbine just like in a compressor, we have different components. In a compressor, know that we have a rotor followed by a stator. In the case of turbines, we have a nozzle or a stator which precedes a rotor. So, a nozzle or a stator guides and accelerates the flow into a rotor and of course, the work extraction takes place in the rotor and which is unlike in a compressor where it is the rotor which comes first and drives the flow and And then that goes into a stator which again turns it back to the axial direction and so on diffusion takes place in both the rotor and the stator. 
In a turbine as well you could have differential amounts of acceleration or pressure drop taking place in the rotor and the stator and there are certain types of turbines where the entire pressure drop takes place only in the stator. The rotor does not contribute to any pressure drop it simply deflects the flow. are called impulse turbines. We will discuss that in little more detail in some of these slilator slides. So, basically the flow in a turbine is accelerated in nozzle or a stator and it then passes through a rotor. In a rotor the working fluid basically imparts momentum to the rotor and basically that converts the kinetic energy to power output. Now, depending upon the power requirement this process obviously is repeated in multiple stages you would have number of stages which will generate the required work output which is also similar to what you have in a compressor where you might have multiple stages which basically are meant to give you the required pressure rise in a typical axial compressor. Now we have seen this aspect in compressors as well that due to the motion of the rotor blades you have basically two distinct components or types of velocities one is the absolute component or type of velocity and the other is a relative component or relative velocity. This was also discussed in detail in compressors and so you will in a turbine analysis that we will do in by analyzing the velocity triangle you will see that there are these two distinct components which will become obvious when we take up velocity triangles and this very similar to what you had discussed in compressor. So, if you have understood velocity triangle construction for an axial compressor, it is pretty much the same in the case of a turbine as well. So, that will that is probably the reason why it will make it simpler for you to understand the construction of a velocity triangle. Now, the fundamental difference between compressor and the turbine is the fact that a compressor is required to generate a certain pressure rise. There is a work input into the compressor which is what is used in increasing the pressure across the compressor. Compressor operates in an adverse pressure gradient mode that is the flow always sees an increasing pressure downstream. In the case of a turbine, it is not that case, it is the other way around that the flow always sees a favorable pressure gradient because there is a pressure drop taking place in a turbine which leads to which is how the turbine extracts work from the flow. That is it converts part of the kinetic energy which the flow has into work output. And therefore, in a turbine the flow always sees a favorable pressure gradient and that is one fundamental difference between a turbine and a compressor. Now, because you have a favorable pressure gradient, the problems that we have seen in the case of compressor like flow separation and blade stall and surge and all that does not really affect a turbine because a turbine the flow is always in an accelerating mode. And so, the problem of flow separation does not really limit the performance of a turbine. So, it is possible that we can extract lot more work per stage in a turbine as compared to that of a compressor. And therefore, you would if you have noticed a schematic of a typical modern day jet engine, you will find that there are numerous stages of compressors may be 15 or 20 which are actually driven by may be 2 or 3 stages of turbine. 
So, each stage of a turbine can actually give you much greater pressure drop than what we can achieve or the kind of pressure rise we can achieve in one stage of a compressor which is why a single stage of a turbine can drive multiple stages of compressor. So, that is a very important aspect that you need to understand because The fundamental reason for this being the fact that turbines operate in a favorable pressure gradient compressors operate in an adverse pressure gradient. So, there are limitations in a compressor which will prevent us from having very high values of pressure rise per stage that is not a limitation in a turbine and that is why you have much greater pressure drop taking place in a turbine as compared to that of the pressure rise that you get from one stage of a compressor. So turbines like compressors can be of different types. Compressors we have seen can be either axial or centrifugal. In the case of turbines you can in fact in the some literature which also says they could also have mixed type of compressors axial and centrifugal mixed. Similar thing is also there in the case of turbine. You could have an axial turbine or a radial turbine or a mixture or combination of the two called mixed flow turbines. Accel turbines obviously can handle large mass flows and obviously are more efficient as very similar analogy we can take from compressors which have larger mass flow and are obviously more efficient. Accel turbine main advantage is that it has the same frontal area of that of a compressor and also it is possible that we can use an axial turbine with that of a centrifugal compressor. So, that is also an advantage and what is also seen is that efficiency of turbines are usually higher than that of compressors. The basic reason again is related to the comment I made earlier that turbines operate in a favorable pressure gradient and so the problems that flow sees in an adverse pressure gradient is not seen. There are no problems of flow separation except in some rare cases and this also means that theoretically turbines are easier to design. Well easier is in quote and uncourt, well in the sense that you know compressors require little more care in terms of aerodynamic design. But of course turbines have a different problem because of high temperatures and so turbine blade cooling and associated problems that is an entirely different problem altogether. So aerodynamically if you have to design a compressor or a turbine turbines would be a tad easier to design than compressors just because of the fact that you do not have to really worry about the chances of flow separation across the turbine because it is always an accelerating flow. In the case of compressors that is not the case and there is always a risk that a compressor might enter into stall. So let us now take a look at now that I have spoken a lot about types of turbines and their functions and so on. Let us take a look at a typical axial turbine stage. So what shown here is a simple schematic of an axial turbine stage. So, an axial turbine stage consists of as I mentioned a nozzle or a stator followed by a rotor. So, this is just representing a nozzle through which hot gases from the combustion chamber are expanded and then that passes through a rotor which is what gives us the power output. Rotor is mounted on what is known as a disc and of course, the flow from the rotor is exhausted into either a next stage or through the component downstream which could be a nozzle in the case of an aircraft engine. 
So, usually we would be denoting the stator inlet as station 1, the stator exit as station 2 and the rotor exit as station 3. In some of the earlier generation turbines, the disc was a separate entity; the rotor blades were mounted on slots which were provided on the disc, and so there was a separate mechanism for mounting rotor blades on the disc. It was very soon realized that having a separate disc and separate blades will obviously increase the number of parts, so the part count will increase tremendously. But with modern day manufacturing capabilities, in terms of 5 axis and 7 axis numerically controlled machines, called CNC machines or computer guided machines, it is possible for us to make them out of a single piece. And this is done in smaller size engines now, and some of the companies have their own names for that; for example, GE calls such a disc, which is a combination of the disc and the blades, a blisk. Blisk means blade and disk together, machined out of a single piece of metal. And similarly, their competitors also have their own terminologies; for example, Pratt and Whitney calls it an integrated blade rotor or IBR. There is no distinct root fixture for a blade, because the blade and the disc are a single component. The main advantage is that you have reduced the number of parts significantly. Whereas, let us say, a typical turbine rotor may have something like 70 to 80 blades or even more mounted on a disc, so that is like 80 to 90 parts for one stage of a rotor, if you have a blisk you have just one component, because all the blades and the disc form a single piece. That is a tremendous advantage in terms of the maintenance aspect, but at the same time the primary disadvantage is the fact that if there is one blade which gets damaged, in the earlier scenario you just have to replace that blade. Here it becomes impossible to replace the blade, and so then of course you will have to do a rebalancing of the disc, and if the damage is severe then the whole disc has to be replaced. Of course, there are pros and cons of having an integrated blade rotor concept, a lot of advantages and disadvantages. But at least for smaller engines, economically, that is in the long run, it seems to be an advantage to have a combination of the blade and the disc. So, having understood some of the fundamentals of turbines, let us move on to the more important aspect of analysis, the two dimensional analysis, that is to do with velocity triangles. I think we spent quite some time discussing velocity triangles for compressors. So, I will assume that you have understood the fundamentals of velocity triangles and try to move on to constructing the velocity triangles directly, unlike in compressors where I had done it step by step. The process is exactly the same as what you had done for a compressor, but of course, it being a turbine, there are subtle differences which you need to understand. Now, velocity triangle analysis is an elementary analysis, and this is elementary to axial turbines as well, just like in the case of compressors. 
Now, the usual procedure is to carry out this analysis at the mean blade height. We will take the blade speed at that height to be capital U, the absolute component of velocity we will denote by C, and the relative component we will denote by V. The axial component of the absolute velocity is denoted by the subscript a, and, just like in compressors, tangential components will be denoted by the subscript w. So, C w is the tangential component of the absolute velocity and V w is the tangential component of the relative velocity. Regarding angles, alpha will denote the angle between the absolute velocity and the axial direction, and beta denotes the corresponding angle for the relative velocity. So, this is the nomenclature that we have used for a compressor, and we will follow exactly the same nomenclature in the case of the turbine as well. So, let us move directly to a velocity triangle of a typical turbine stage. A turbine stage, as we have already seen, consists of a stator, or a nozzle; it is usually referred to as a nozzle in the case of a turbine because the flow is accelerated in the stator of a turbine, and that is why it is called a nozzle. And then you have a rotor which follows the stator or nozzle. Now, the inlet to the stator is denoted as station 1, the exit is denoted as station 2, and the exit of the rotor is station 3. So, let us say there is an inlet velocity which is given by C 1, which is the absolute velocity entering at an angle alpha 1. The flow exits the stator or nozzle highly accelerated, with velocity C 2; you can see that C 2 is much higher than C 1, and that is exactly the reason why this is called a nozzle. Now, at the rotor entry we also have a blade speed u. Please note that the direction of this vector u is from the pressure surface to the suction surface, unlike in a compressor where it was the other way round. Here the flow drives the blades, and that is why you have the blade speed in this direction. C 2 is the absolute velocity entering the rotor, and the relative velocity will be the vector difference between these two, and that is given by V 2. Alpha 2 is the angle which C 2 makes with the axial direction, beta 2 is the angle which V 2 makes with the axial direction, and just like we have seen in compressors, V 2 enters the rotor at an angle which is tangential to the camber line at the leading edge. This is obviously when the incidence is close to 0, to ensure that the flow does not separate. At the rotor exit we have V 3; V 3 is less than V 2 as you can see, and of course that also depends upon the type of the turbine, whether it is impulse or reaction. You also have C 3 here and this is the blade speed u; beta 3 is the angle which V 3 makes with the axial direction, and alpha 3 is the angle which the absolute velocity C 3 makes with the axial direction. So, if you now go back to the earlier slides of lecture 2 or 3, where we had discussed velocity triangles for an axial compressor, you can quite easily see the similarities as well as the differences. So, I would strongly urge you to compare both these velocity triangles by keeping them side by side, so you can understand the differences between a compressor and a turbine. At the same time, you can also try to figure out some similarities between these two components. 
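As a quick numerical illustration of the nomenclature just described, here is a minimal Python sketch that resolves an assumed nozzle-exit velocity into its axial, tangential and relative components at the rotor inlet; the values of U, C2 and alpha2 are illustrative assumptions, not numbers from the lecture's slides.

```python
import math

# Illustrative values (assumed, not from the lecture): blade speed U at mean height
# and the absolute velocity C2 leaving the nozzle at angle alpha2 to the axial direction.
U = 340.0
C2 = 550.0
alpha2 = math.radians(70.0)

Ca = C2 * math.cos(alpha2)        # axial component (assumed constant across the stage)
Cw2 = C2 * math.sin(alpha2)       # tangential component of the absolute velocity
Vw2 = Cw2 - U                     # tangential component of the relative velocity
V2 = math.hypot(Ca, Vw2)          # relative velocity entering the rotor
beta2 = math.degrees(math.atan2(Vw2, Ca))  # relative flow angle from the axial direction

print(f"Ca = {Ca:.1f} m/s, Cw2 = {Cw2:.1f} m/s, V2 = {V2:.1f} m/s, beta2 = {beta2:.1f} deg")
```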
And so, it is very necessary that you understand clearly both the differences as well as the similarities from a very fundamental aspect, that is, the velocity triangle point of view. So, this is a standard velocity triangle for a typical turbine stage. I have not really mentioned here what kind of a turbine it is, whether it is impulse or reaction; we will come to that classification very soon, and you will see that there are different ways in which you can express the velocity triangle for both of these types of turbines. So, let us now try to take a look at the different types of turbines. I mentioned in the beginning that there are two different configurations of axial turbines that are possible, the impulse and the reaction turbine. In an impulse turbine the entire pressure drop takes place in the nozzle, and the rotor blades would simply deflect the flow and would have a symmetrical shape. So, there is no acceleration or pressure drop taking place in the rotor in an impulse turbine. The rotor blades simply deflect the flow and guide it to the next nozzle, if there is one present. In a reaction turbine, on the other hand, the pressure drop is shared by the rotor as well as the stator, and the amount of pressure drop that is shared is defined by the degree of reaction, which we will discuss in detail in the next lecture. This means that the degree of reaction of an impulse turbine would be 0, because the entire pressure drop has already taken place in the stator; the rotor does not contribute to any pressure drop, and so the degree of reaction for an impulse turbine should be 0. So, these are two different configurations of axial turbines which are possible, and we will take a look at their velocity triangles also, but before that we need to understand the basic mechanism by which work is done by a turbine. Now, if you were to apply the angular momentum equation for an axial turbine, what you will notice is that the power generated by a turbine is a function of three parameters: one is of course the mass flow rate, and the other parameters are the blade speed and the tangential component of the absolute velocity. So, if you apply angular momentum at the inlet and exit of the rotor, then the power generated by the turbine is equal to the mass flow rate multiplied by u 2 into C w 2, which is the product of the blade speed and the tangential component of the absolute velocity at the inlet of the rotor, minus u 3 times C w 3, which is again the blade speed at the rotor exit multiplied by the tangential component of the absolute velocity at the rotor exit. Now, we would normally assume that the blade speed does not change at a given radial plane, and therefore u 2 can be assumed to be equal to u 3; therefore, the work done per unit mass would now be equal to the blade speed, that is u, multiplied by C w 2 minus C w 3. From the thermodynamics point of view, there is a stagnation pressure and stagnation temperature drop taking place in a turbine, because the turbine expands the flow and work is extracted from the turbine, and therefore there has to be a stagnation temperature drop taking place in a turbine. Therefore, the enthalpy difference between the inlet and exit of the turbine would basically be equal to the work done by, or work developed by, this particular turbine. 
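The Euler work relation just described can be checked with a short sketch; the velocities and the value of cp below are assumptions chosen only to give representative magnitudes.

```python
# Euler work for the rotor, as stated above: w = U*(Cw2 - Cw3) when U2 = U3 = U,
# and the same work appears as a stagnation temperature drop, w = cp*(T01 - T03).
# All numbers below are assumptions for illustration only.
U, Cw2, Cw3 = 340.0, 517.0, -80.0   # m/s (Cw3 negative: assumed exit swirl opposite to blade motion)
cp = 1148.0                          # J/(kg K), a typical value for hot combustion gases

w = U * (Cw2 - Cw3)                  # specific work extracted, J/kg
dT0 = w / cp                         # stagnation temperature drop across the stage, K
print(f"w = {w/1000:.1f} kJ/kg, stagnation temperature drop = {dT0:.1f} K")
```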
So, the work done per unit mass is also equal to C p times T 0 1 minus T 0 3, which is basically the enthalpy difference: C p T 0 1 is the enthalpy at the inlet of the turbine, C p T 0 3 is the enthalpy at the exit of the turbine. Let us now denote by delta T 0 the change in stagnation temperature in the turbine; delta T 0 is equal to T 0 1 minus T 0 3, which is also equal to T 0 2 minus T 0 3, because 1 to 2 is the stator and there cannot be any change in stagnation temperature in the stator. Therefore, T 0 1 minus T 0 3 is equal to T 0 2 minus T 0 3. So, we now define what is known as the stage work ratio, which is basically delta T 0 by T 0 1, and that is equal to U times C w 2 minus C w 3 divided by C p times T 0 1. This basically follows from these two equations here which correspond to the work done per unit mass, one in terms of the velocities and the other in terms of stagnation temperatures. A similar analysis was also carried out when we were discussing axial compressors, where we had equated the work done by the compressor on the flow to the stagnation temperature rise taking place in the compressor as a result of that work. There also we had defined the pressure rise or pressure ratio per stage in terms of the temperature rise across that particular stage and the velocity components which come from the velocity triangles. Now, what you can see here is that the turbine work per stage would basically be limited by two parameters. One is the pressure ratio that is available for expansion, and the other aspect is the allowable blade stress and turning that is physically possible for one to achieve in the case of a particular turbine. So, there are two parameters, one being the available pressure ratio and the other the allowable blade stress and turning that one can achieve in a particular turbine configuration. Unlike in a compressor, where we also had the issue of boundary layer behavior because the flow was always operating in an adverse pressure gradient mode, in a turbine the pressure gradient is favorable. So, boundary layer behavior is generally something that can be controlled, and there are normally not many issues related to boundary layer separation or growth of the boundary layer and so on. Of course, there are certain operating conditions under which certain stages of turbines may undergo local flow separation, but that is only for short durations; in general, in a favorable pressure gradient, boundary layers tend to be well behaved. Now, the turbine work ratio that we had seen in the previous slide is also often defined as the ratio between the work done per unit mass and the square of the blade speed. Therefore, W t by U square, which is also equal to the enthalpy drop in the case of a turbine divided by U square, is basically equal to delta C w divided by U, the net change in the tangential component of the absolute velocity divided by the blade speed. Now, this is an important parameter, because based on this we can understand the differences between an impulse turbine and a reaction turbine, which is what we are going to do next: to take a look at what the fundamental differences are, besides of course the fact that in an impulse turbine the entire pressure drop takes place only in the nozzle, whereas in a reaction turbine it is shared between the nozzle and the rotor. 
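A small sketch of the two ratios just defined, the stage work ratio delta T0 / T01 and the blade-loading form Wt / U squared; the turbine entry temperature and the velocities are assumed, illustrative values carried over from the sketch above.

```python
# Stage work ratio and blade-loading form of the work, as defined above.
# T01 is an assumed turbine entry stagnation temperature; U, Cw2, Cw3, cp are
# the same illustrative values used in the previous sketch.
T01 = 1400.0                                   # K (assumed)
U, Cw2, Cw3, cp = 340.0, 517.0, -80.0, 1148.0

stage_work_ratio = U * (Cw2 - Cw3) / (cp * T01)   # delta T0 / T01
work_ratio = (Cw2 - Cw3) / U                      # Wt / U^2 = delta Cw / U
print(f"dT0/T01 = {stage_work_ratio:.3f}, Wt/U^2 = {work_ratio:.2f}")
```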
Let us take up an impulse turbine first; we will take a look at the velocity triangles for an impulse turbine and then try to find out the work ratio per stage of an impulse turbine and relate it to some parameters which we can get from the velocity triangles. So, here we have a typical impulse turbine stage, a row of nozzle blades followed by a row of rotor blades. The flow is accelerated in the nozzle, and so the velocity that reaches the rotor has an absolute component C 2 at an angle of alpha 2 with the axial direction, and as a result of the blade speed u, the relative velocity which enters the rotor is V 2, which is at an angle of beta 2 with the axial direction. And in an impulse turbine, I mentioned that the rotor simply deflects the flow and there is no pressure drop taking place in the rotor. Therefore, at the exit of the rotor we have V 3, which is at an angle of beta 3, and by virtue of the symmetry of the blades we will have beta 2 equal to minus beta 3, and in magnitude V 2 would be equal to V 3. This we can also see from the velocity triangle shown here: C 2 is the absolute velocity entering the rotor, V 2 is the relative velocity, and the corresponding angles here are alpha 2 and beta 2. Now, at the rotor exit we have V 3, which is equal to V 2 in magnitude, but at an angle which is different from the inlet; that is, beta 3 will be the negative of beta 2, in the other direction. The absolute velocity leaving the blade is C 3. Now, if you look at the other components of velocities: this is the axial component of the absolute velocity, C a, and the corresponding tangential components of the relative velocity, V w 2 and V w 3, are equal in magnitude but opposite in direction, because V 2 and V 3 point in opposite tangential directions. C w 2 is the tangential component of the absolute velocity at the inlet and C w 3 that at the exit of the rotor. So, this is a typical velocity triangle of an impulse turbine stage, and if you take a closer look at the velocity triangles, I have mentioned that the angles beta 3 and beta 2 are equal in magnitude but differ in their orientation. So, beta 3 is equal to minus beta 2, which means that we have V w 3 equal to minus V w 2. And the difference in the tangential component of the absolute velocity, C w 2 minus C w 3, will be equal to twice V w 2. So, let us take a look at the velocity triangle again: C w 2 minus C w 3 is equal to the sum of V w 2 and V w 3, and since they are equal in magnitude, that is equal to twice V w 2, which is also equal to 2 into (C w 2 minus U), or, equivalently, 2 u into (C a by u tan alpha 2 minus 1). That again comes from the velocity triangles: you can see that C a tan alpha 2 is this component, minus u gives V w 2, and twice of that gives the difference. So, the difference between the tangential components of the absolute velocity, C w 2 minus C w 3, that is delta C w for an impulse turbine, will be equal to 2 u into (C a by u tan alpha 2 minus 1). Therefore, the work ratio that we have defined earlier, for an impulse turbine, that is delta H naught by u square, is equal to 2 into (C a by u tan alpha 2 minus 1). We will now take a look at what happens in the case of a reaction turbine, calculate the work ratio as applicable for a reaction turbine, and see whether there is a fundamental difference in the work ratio of an impulse turbine and a reaction turbine. 
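The impulse-stage result just derived, delta H0 / U squared = 2 (Ca/U tan alpha2 - 1), can be evaluated directly; the values of U, Ca and alpha2 below are assumptions for illustration only.

```python
import math

# Impulse stage: delta Cw = 2*Vw2 = 2*(Cw2 - U), so the work ratio reduces to
# delta H0 / U^2 = 2*((Ca/U)*tan(alpha2) - 1). Values are illustrative assumptions.
U, Ca = 340.0, 190.0
alpha2 = math.radians(70.0)

work_ratio_impulse = 2.0 * (Ca / U * math.tan(alpha2) - 1.0)
print(f"impulse work ratio = {work_ratio_impulse:.2f}")
```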
Now, let us take a look at a typical 50 percent reaction turbine, just for simplicity. The reason why we took up a 50 percent reaction turbine is because in a 50 percent reaction turbine the pressure drop is shared equally between the nozzle and the rotor, and therefore the velocity triangles, as you can see, are mirror images of one another. The velocity triangle at the inlet of the rotor is this, where C 2 is the absolute velocity coming in from the nozzle, V 2 is the relative velocity and this is the blade speed. And since they are mirror images, at the exit of the rotor you have V 3 and C 3, and therefore you can clearly see that C 2 will be equal to V 3 and V 2 will be equal to C 3; correspondingly, the angle alpha 2 will be equal to beta 3 and beta 2 will be equal to alpha 3. This is true only for a 50 percent reaction turbine. For any other reaction stage, of course, the velocity triangles need not necessarily be symmetrical, and this is also assuming that the axial velocity does not change across the rotor and the nozzle. Now, for this kind of a reaction turbine, which has a degree of reaction of 0.5, since the velocity triangles are mirror images or symmetrical, if you assume constant axial velocity, we have C w 3 equal to minus (C a tan alpha 2 minus u), and therefore the turbine work ratio would basically be equal to 2 into C a by u tan alpha 2, minus 1. This we can compare with that of the impulse turbine, where it was 2 into (C a by u tan alpha 2 minus 1). So, you can immediately see that there is a fundamental difference between the work ratio of an impulse turbine and, in this case of course, the example of a 50 percent reaction turbine. So, there is a fundamental difference between the work ratio as applicable for an impulse turbine as compared to that of a 50 percent reaction turbine, and in general for any reaction turbine as well. Now, this was as far as the different types of turbine configurations were concerned: how one can analyze these turbine configurations, what the fundamental differences are between, let us say, an impulse turbine and a reaction turbine, and how one can, from the velocity triangles, estimate the work ratio or the work done by these kinds of turbine stages. So, what I was suggesting at the beginning was that you can clearly see differences between compressors and turbines by looking at the velocity triangles for these two different cases and comparing them, to understand the fundamental working of compressors and turbines and what makes them two different components. What we can take up next for discussion is something we have discussed in detail for compressors as well, that is to do with a cascade. As you have already seen, a cascade is a simplified version of a rotating machine, and you could have different versions of a cascade: you could have a linear cascade or an annular cascade. Basically, a cascade would have a set of similar blades which are all arranged in a certain fashion at a certain angle, which we have referred to as the stagger angle. Cascade analysis forms a very fundamental part of the design of turbo machines, whether compressors or turbines. So, a cascade basically consists of an array of stationary blades and is constructed basically for the measurement of performance parameters. 
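For comparison, here is a short sketch evaluating the two work-ratio expressions side by side for the same assumed Ca/U and alpha2; written this way, the two expressions differ by exactly 1, which is the algebraic content of the difference pointed out above.

```python
import math

# Comparison of the two work ratios derived above, for the same Ca/U and alpha2.
# 50% reaction: 2*(Ca/U)*tan(alpha2) - 1;  impulse: 2*((Ca/U)*tan(alpha2) - 1).
# Numbers are the same illustrative assumptions as in the earlier sketch.
U, Ca = 340.0, 190.0
alpha2 = math.radians(70.0)

phi_tan = Ca / U * math.tan(alpha2)
wr_reaction_50 = 2.0 * phi_tan - 1.0
wr_impulse = 2.0 * (phi_tan - 1.0)
# For identical Ca/U and alpha2 the two expressions differ by exactly 1.
print(f"50% reaction: {wr_reaction_50:.2f}, impulse: {wr_impulse:.2f}")
```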
And what is usually done is that we would like to eliminate any three dimensional effects which are likely to come up in a cascade, and one of the sources of three dimensionality is the presence of the boundary layer. So, one would like to remove the boundary layer from the end walls of a cascade, and so it is a standard practice to have porous end walls through which the boundary layer flow can be removed, to ensure two dimensionality of the flow entering the cascade. Now, it is also a standard assumption that radial variations in the velocity field can be eliminated or ignored. Cascade analysis is primarily meant to give us some idea about the amount of blade loading that a particular configuration can give us, as well as the losses in total pressure that one can measure from a cascade analysis. Turbine cascade testing also involves wind tunnels which are very similar to what we had discussed for compressors. I had shown you cascade wind tunnels when we were discussing cascades in the context of compressors. Turbine cascades are also tested in similar wind tunnels; it is just that in the case of turbines, since they operate in an accelerating flow, there is a requirement of a certain pressure drop across the turbine, and therefore the wind tunnel is required to generate sufficient pressure which can be expanded through a turbine cascade. Now, turbine blades, as you are probably aware, are likely to have much higher camber than compressor blades. And turbine cascades are set at a negative stagger, unlike compressor blades, something I will explain when we take up a cascade schematic in detail. Now, cascade analysis will basically give us, as I mentioned, two parameters, besides a set of other parameters like boundary layer thickness and losses etcetera. The most fundamental parameter we would like to look at from the cascade analysis is the surface static pressure distribution or C P distribution, which is related to the loading of the blade, and the second aspect is the total pressure loss across the cascade, which is yet another parameter that one would like to infer from the cascade analysis. Now, let us take a look at a typical turbine cascade nomenclature. I think I mentioned at the beginning that all the terms that we have used for compressors carry over; it is the same nomenclature that we apply for a turbine as well. It is just that the way the blades are set, or the blade geometry, is quite different between compressors and turbines. So, if you look at a typical turbine cascade, these are the blades; you can immediately see that these blades have much higher turning or camber than compressor blades. It is a set of these blades, arranged either linearly or in another fashion, which constitutes a cascade. These blades are set apart by a certain distance which is, as you can see, denoted as the pitch or spacing, and these blades are set at a certain angle which is called the blade setting or stagger angle. So, this lambda which you see here refers to the blade setting or stagger angle. The blades have a certain camber, which is basically the angle subtended between the tangent to the camber line at the leading edge and that at the trailing edge; the difference between these gives us the blade camber. Now, the flow enters the cascade at a certain angle; you can see that the blade inlet angle is given here as beta 1 and the blade outlet angle is beta 2. 
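A minimal sketch of the cascade nomenclature just listed, collected into a small data structure; the field names and the numerical values are illustrative assumptions, not the lecture's own notation or data.

```python
from dataclasses import dataclass

# A small container for the turbine cascade geometry described above: pitch (s),
# chord (c), stagger angle (lambda, negative for turbine cascades per the lecture)
# and the blade inlet/outlet angles, from which camber follows.
@dataclass
class TurbineCascadeGeometry:
    pitch: float             # s, blade spacing
    chord: float             # c
    stagger_deg: float       # blade setting or stagger angle
    blade_inlet_deg: float   # tangent to the camber line at the leading edge
    blade_outlet_deg: float  # tangent to the camber line at the trailing edge

    @property
    def camber_deg(self) -> float:
        # Camber: difference between the camber-line tangents at the two edges.
        return abs(self.blade_inlet_deg - self.blade_outlet_deg)

    @property
    def solidity(self) -> float:
        return self.chord / self.pitch

geom = TurbineCascadeGeometry(pitch=0.05, chord=0.08, stagger_deg=-35.0,
                              blade_inlet_deg=30.0, blade_outlet_deg=-60.0)
print(f"camber = {geom.camber_deg:.0f} deg, solidity = {geom.solidity:.2f}")
```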
Now, if there is a difference between the blade angle and the flow angle at the inlet, that is basically the incidence, which is denoted by i here. So, this is the incidence angle; similarly, the difference between the blade outlet angle and the outflow angle is the deviation, which is denoted by delta. So, at the exit you may have a flow deviation, and at the inlet one may have an incidence. And if you draw a normal to the tangent at the trailing edge and take it to the suction surface of the next adjacent blade, this distance that you see here is basically referred to as the throat or opening at the turbine exit, and that is denoted here by the symbol o. The blade chord, as you already know, is denoted by c, and then the blades also have a certain finite thickness at the trailing edge. So, that is denoted here by the trailing edge thickness; the blades practically will have a certain amount of finite thickness, and that is what is denoted here as the thickness at the trailing edge. So, this is the fundamental nomenclature that is used for turbines; a very similar set was also used for compressors, where we had defined all these different parameters like incidence, deflection, deviation, blade angle, camber, pitch, stagger; all of them were defined. The difference, of course, is the way the blades are set: this is set at a negative stagger as you can see, and if you go back to the compressor cascade you will see that the way the blades are set is opposite to what you see in the case of a turbine. That is basically to ensure that the flow passage gives you the required amount of flow turning and also the flow acceleration in the case of turbine cascades, whereas in compressor cascades the setting is to ensure that you get a deceleration. So, having understood the fundamental nomenclature of a turbine cascade, we will now take a closer look at the different aspects of flow through a cascade, and I will give you some idea, not really a detailed derivation, about how one can calculate the lift developed by a turbine cascade in two different cases. One is if we do not assume any losses, that is, an inviscid analysis, followed by a viscous analysis, where one of course would also get a drag; we will see how one can calculate the lift, and of course that is basically related to the loading of the blades eventually. So, the basic idea of cascade analysis is that, just like in the case of an airfoil, because cascade analysis is in some sense an airfoil analysis, we can determine the lift and drag forces acting on the blades. And this analysis, as I mentioned, can be carried out using both these assumptions: a potential flow or inviscid analysis, or by considering viscous effects in a rather simplistic manner. So, we will assume that the mean velocity, which we are going to denote as V subscript m, makes an angle of alpha subscript m with the axial direction. What we will do is determine the circulation developed on the blades and subsequently the lift force. In the inviscid analysis, obviously there is no drag; lift is the only force acting on the blade in the case of an inviscid analysis. When you take up a viscous analysis, there are two components of the resultant force, namely lift and drag. So, this is the geometry we are considering for an inviscid flow through a turbine cascade. 
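The incidence and deviation just defined amount to simple angle differences; a tiny sketch with assumed angles follows.

```python
# Incidence and deviation as defined above: incidence i is the inlet flow angle
# minus the blade inlet angle; deviation delta is the outlet flow angle minus
# the blade outlet angle. All angles below are illustrative assumptions (degrees).
flow_inlet, blade_inlet = 33.0, 30.0
flow_outlet, blade_outlet = -58.5, -60.0

incidence = flow_inlet - blade_inlet      # i
deviation = flow_outlet - blade_outlet    # delta
print(f"i = {incidence:.1f} deg, delta = {deviation:.1f} deg")
```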
If you take a look at two different streamlines, let us say this is one streamline and this is another streamline, bounding one particular blade that is shown here. These are the two different streamlines. What we are going to do is find the circulation produced around this particular blade, which is essentially an airfoil here, and then relate that to the lift developed on this particular blade. So, the flow entering the cascade is V 1 and the flow exiting the blade is V 2, and of course we will assume a mean velocity V m which makes an angle alpha m with the axial direction. So, if this is the case, this is the contour along which we are calculating the circulation, and this is the lift acting on this particular blade. Since it is a turbine blade, you know that this is basically the direction in which the lift is going to act. So, the mean velocity that is shown here by the vector V m acts in this direction, this is the inflow velocity V 1 and this is the exit velocity V 2. So, the circulation, denoted by capital gamma here, is equal to s multiplied by the difference in the tangential velocities, V w 2 minus V w 1, and lift is related to the circulation: it is the product of the density, the mean velocity and the circulation. Therefore, when there are no other effects considered, like viscous effects, the lift acting here would be simply the product of rho times V m into the circulation, which is s into V w 2 minus V w 1. This is expressed in a non dimensional form which we refer to as the lift coefficient. So, C L here is the lift divided by half rho V m square into c, and this is equal to rho into V m into s into V w 2 minus V w 1 by half rho V m square into c. This can be related to the angles across the cascade, and so we can simplify this lift coefficient as 2 into s by c into tan alpha 2 minus tan alpha 1, multiplied by cos alpha m. So, this is basically the lift coefficient on a turbine blade assuming that the flow is inviscid. Now, what happens if there are viscous effects? The primary effect of viscosity on the flow through a turbine cascade is the fact that viscous effects will manifest themselves in the form of total pressure losses. And therefore, the wake from the blade trailing edge will lead to a non uniform velocity leaving the blades. In the previous analysis we were assuming uniform velocity entering the blades and uniform velocity leaving the blades, because it is a potential flow. So, here in the case of the viscous analysis, in addition to lift, one would also have a drag, which will also contribute to the lift in some way or the other. So, the effective force acting on the blade will be a resultant of both the lift as well as the drag acting on the blade. So, we now define what is known as a total pressure loss coefficient; we defined a similar parameter for compressors as well. This is denoted by omega bar, because there is a total pressure loss taking place across the blades as a result of the viscous effects. So, omega bar is equal to P 0 1 minus P 0 2 divided by half rho V 2 square. This is the loss in total pressure across the turbine cascade. So, the schematic I had shown earlier now gets modified, because you have a set of uniform streamlines entering the turbine cascade, but as they leave the cascade you can see that they have become non-uniform, basically at the trailing edge where there is a wake. 
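The inviscid cascade result quoted above can be evaluated with a short sketch; the geometry, density and flow angles are assumed, and the mean-angle convention tan(alpha_m) = (tan(alpha1) + tan(alpha2)) / 2 is a common choice rather than something spelled out in the lecture.

```python
import math

# Inviscid cascade result quoted above: circulation = s*(Vw2 - Vw1),
# L = rho*Vm*circulation, and CL = 2*(s/c)*(tan(alpha2) - tan(alpha1))*cos(alpha_m).
# All numbers below are illustrative assumptions.
rho, s, c = 1.2, 0.05, 0.08
alpha1, alpha2 = math.radians(0.0), math.radians(65.0)
Ca = 150.0                                                         # axial velocity, assumed constant

alpha_m = math.atan(0.5 * (math.tan(alpha1) + math.tan(alpha2)))   # assumed mean-angle convention
Vm = Ca / math.cos(alpha_m)                                        # mean velocity
circulation = s * Ca * (math.tan(alpha2) - math.tan(alpha1))       # s*(Vw2 - Vw1)
lift = rho * Vm * circulation                                      # lift per unit span
CL = 2.0 * (s / c) * (math.tan(alpha2) - math.tan(alpha1)) * math.cos(alpha_m)
print(f"L' = {lift:.0f} N/m (per unit span), CL = {CL:.2f}")
```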
What is shown here schematically are the different wakes of all these blades that are present here. So, there is a difference in the forces acting on the blade as a result of this non-uniformity in the velocity at the exit of the turbine cascade. In this case we can relate the drag to the total pressure losses: the drag is omega bar into s into cos alpha m, and therefore the effective lift will now be equal to the sum of the lift as well as the component of drag in that effective direction, that is omega bar into s into cos alpha m; and lift, we know, is the product of the density, the mean velocity and the circulation. So, that is rho V m into gamma, plus omega bar into s into cos alpha m. Therefore, the lift coefficient in this case will get modified as 2 into s by c into tan alpha 2 minus tan alpha 1 into cos alpha m, plus the drag component, C D times tan alpha m. So, this is the manner in which we can calculate the lift coefficient for both these cases, one for the case without viscous effects and the second if we consider viscous effects. The basic idea of calculating these coefficients was to also calculate the blade efficiency. Based on the calculation of the lift and drag coefficients, we can now calculate the blade efficiency, which is basically the ratio of the ideal static pressure drop required to obtain a certain change in kinetic energy to the actual static pressure drop which will produce the same change in kinetic energy. I will of course skip the derivation of the blade efficiency, but it can be related to the lift and drag coefficients: the blade efficiency is 1 minus C D by C L tan alpha m, divided by 1 plus C D by C L cot alpha m. And if you were to neglect the drag term in the lift definition, because the drag term is usually much smaller in comparison to the lift, the blade efficiency is simply 1 by 1 plus 2 into C D divided by C L sin of twice alpha m. So, the basic idea of calculating the lift and drag coefficients was also to calculate the blade efficiency, which is basically a function of C D, C L and the mean angle alpha m. So, let me now quickly recap our discussion in today's class. We had taken up three distinct topics for discussion. One was the different types of axial turbine configurations, the impulse and the reaction turbine stages, and we have had a look at the velocity triangles and how you can calculate the work ratio for impulse and reaction turbine stages. We also looked at the work and stage dynamics; we have looked at these different configurations of axial turbines and how we can go about determining the work ratio for these two configurations of axial turbine. And then we had some discussion on turbine cascades and the calculation of lift and drag for a typical turbine configuration, and how we can use that information to calculate the blade efficiency from simple turbine cascade testing; that is, a simple cascade test can actually give us some idea about the blade efficiency that this kind of a blade configuration can give us. So, that brings us to the end of this lecture. 
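A companion sketch for the viscous case and the blade efficiency discussed above; the loss coefficient value and the particular CD expression used here are assumptions for illustration, written to be consistent with the omega bar definition given earlier but not taken from the lecture's slides.

```python
import math

# Viscous-cascade relations in the form quoted above. The CD expression below,
# CD = omega_bar*(s/c)*cos^3(alpha_m)/cos^2(alpha2), follows from D = s*cos(alpha_m)*dp0
# together with the lecture's omega_bar definition, and is an assumption for this sketch.
s, c = 0.05, 0.08
alpha1, alpha2 = math.radians(0.0), math.radians(65.0)
alpha_m = math.atan(0.5 * (math.tan(alpha1) + math.tan(alpha2)))   # assumed mean-angle convention

omega_bar = 0.05                                                   # assumed total pressure loss coefficient
CD = omega_bar * (s / c) * math.cos(alpha_m) ** 3 / math.cos(alpha2) ** 2
CL = 2.0 * (s / c) * (math.tan(alpha2) - math.tan(alpha1)) * math.cos(alpha_m)  # drag term neglected
CL_viscous = CL + CD * math.tan(alpha_m)                           # lift coefficient including the drag term

# Blade efficiency in the approximate form stated above (drag term in CL neglected).
eta_b = 1.0 / (1.0 + 2.0 * CD / (CL * math.sin(2.0 * alpha_m)))
print(f"CD = {CD:.3f}, CL = {CL_viscous:.2f}, blade efficiency = {eta_b:.3f}")
```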
We will continue discussion on axial turbines in the next lecture as well, where we will primarily be talking about the performance parameters, degree of reaction losses as well as efficiency of axial turbines and where we will also take up detailed discussion on what are the different losses in a two dimensional sense and how we can define efficiency and you will see that there are different ways of defining efficiency for a turbine. So, we will take up some of these topics for discussion in the next class.", "transcription_medium": " Hello and welcome to lecture number 20 of this lecture series on turbo machinery aerodynamics. We have, we are probably half way through this course and I guess you must have had some good idea about what is involved in turbo machinery analysis and what is involved in design of different types of turbo machines, especially the compressors. Now starting the last lecture onwards, we are now looking at the axial turbines and of course, subsequently we will also be talking about the radial turbines and so on. So, I think in the last class you must have had got some introduction to what axial turbines are and what constitutes axial turbines and so on. So, let us take that discussion little bit further in today's class, where we will be talking about a two dimensional analysis of axial compressors, well axial turbines in a very similar fashion to what we had discussed for axial compressors. If you remember during one of the initial lectures, probably the lecture second lecture or the third lecture, we had been talking about axial compressors and how one can analyze axial compressors in a two dimensional sense. So, we will we will carry out a similar analysis and discussion in today's class about how the same thing can be carried out for turbines axial turbines in particular. In today's class, we basically been going to talk about the following topics. We will initially discuss have some introduction to axial turbines, turbines in general. We will talk about impulse and reaction turbine stages, which are the two basic types of axial turbines. We will then talk about the work and stage dynamics, how you can calculate work done by a turbine and how is it different for impulse and reaction turbines. We will then spend some time on discussion about turbine blade cascade. We will assume that the nomenclature we had used in the case of compressors will still be valid, but of course, we will just highlight some simple differences between a compressor cascade and a turbine cascade, but the nomenclature remains the same in the sense that what we had called as camber or stagger or incidence all that remains the same for a turbine. So, I will probably spend lesser time discussing about those and take up some more topics on cascade analysis, which we are not really covered in detail in compressors. Now, when we talk about turbines, you must have had this some discussion or some introduction to the different types of turbines. As we know, there are different types of compressors like axial and centrifugal. Similarly, we have different types of turbines as well. Now, in a turbine, just like in a compressor, we have different components. In a compressor we know that we have a rotor followed by a stator. In the case of turbines, we have a nozzle or a stator which precedes a rotor. 
So, a nozzle or a stator guides and accelerates the flow into a rotor and of course, the work extraction takes place in the rotor and which is unlike in a compressor where it is the rotor which comes first and drives the flow and then that goes into a stator which again turns it back to the axial direction and so on. Diffusion takes place in both the rotor and the stator. In a turbine as well you could have differential amounts of acceleration or pressure drop taking place in the rotor and the stator. And there are certain types of turbines where the entire pressure drop takes place only in the stator. This the rotor does not contribute to any pressure drop, it simply deflects the flow. These are called impulse turbines. We will discuss that in little more detail in some of these later slides. So, basically the flow in a turbine is accelerated in nozzle or a stator and it then passes through a rotor. In a rotor, the working fluid basically imparts momentum to the rotor and basically that converts the kinetic energy to power output. Now, depending upon the power requirement this process obviously is repeated in multiple stages you would have number of stages which will generate the required work output which is also similar to what you have in a compressor where you might have multiple stages which basically are meant to give you the required pressure rise in a typical axial compressor. Now, we have seen this aspect in compressors as well that due to the motion of the rotor blades you have basically two distinct components or types of velocities. One is the absolute component or type of velocity and the other is a relative component or relative velocity. This was also discussed in detail in compressors and so you will in a turbine analysis that we will do by analyzing the velocity triangle, you will see that there are these two distinct components which will become obvious when we take up velocity triangles and this very similar to what you had discussed in compressor. So, if you have understood velocity triangle construction for an axial compressor, it is pretty much the same in the case of a turbine as well. So, that is probably the reason why it will make it simpler for you to understand the construction of a velocity triangle. Now, the fundamental difference between a compressor and a turbine is the fact that a compressor is required to generate a certain pressure rise. There is a work input into the compressor, which is what is used in increasing the pressure across the compressor. Compressor operates in an adverse pressure gradient mode, that is the flow always sees an increasing pressure downstream. In the case of a turbine, it is not that case, it is the other way around that the flow always sees a favorable pressure gradient, because there is a pressure drop taking place in a turbine, which leads to a, which is how the turbine extracts work from the flow. That is, it converts part of the kinetic energy, which the flow has into work output. And therefore, in a turbine, the flow always sees a favorable pressure gradient and that is one fundamental difference between a turbine and a compressor. Now, because you have a favorable pressure gradient, the problems that we have seen in the case of compressor like flow separation and blade stall and surge and all that does not really affect a turbine, because a turbine the flow is always in a accelerating mode. And so, the problem of flow separation does not really limit the performance of a turbine. 
So, it is possible that we can extract lot more work per stage in a turbine as compared to that of a compressor. And therefore, you would if you have noticed a schematic of a typical modern day jet engine, you will find that there are numerous stages of compressors may be 15 or 20, which are actually driven by may be two or three stages of turbine. So, each stage of a turbine can actually give you much greater pressure drop than what we can achieve or the kind of pressure rise we can achieve in one stage of a compressor, which is why a single stage of a turbine can drive multiple stages of compressor. So, that is a very important aspect that you need to understand, because the fundamental reason for this being the fact that turbines operate in a favorable pressure gradient, compressors operate in an adverse pressure gradient. So, there are limitations in a compressor, which will prevent us from having very high values of pressure rise per stage, that is not a limitation in a turbine, and that is why you have much greater pressure drop taking place in a turbine as compared to that of the pressure rise that you get from one stage of a compressor. So, turbines like compressors can be of different types, compressors we have seen can be either axial or centrifugal. In the case of turbines, you can in fact in the some literature which also says they could also have mixed type of compressors axial and centrifugal mixed. Similar thing is also there in the case of turbine, you could have an axial turbine or a radial turbine or a mixture or combination of the two called mixed flow turbines. Axial turbines obviously can handle large mass flows, and obviously are more efficient as very similar analogy we can take from compressors, which have larger mass flow and are obviously more efficient. Axial turbine main advantage is that it has the same frontal area of that of a compressor and also it is possible that we can use an axial turbine with that of a centrifugal compressor. So, that is also an advantage and what is also seen is that efficiency of turbines are usually higher than that of compressors. The basic reason again is related to the comment I made earlier that turbines operate in a favorable pressure gradient. And so, the problems that flow sees in an adverse pressure gradient is not seen. There are no problems of flow separation except in some rare cases. And this also means that theoretically turbines are easier to design. Well, easier is in quote and unquote. Well, in the sense that you You know compressors require little more care in terms of aerodynamic design, but of course, turbines have a different problem because of high temperatures and so turbine blade cooling and associated problems that is an entirely different problem all together. So, aerodynamically if you have to design a compressor and a turbine, turbines would be a tad easier to design than compressors just because of the fact that you do not have to really worry about the chances of flow separation across the turbine, because it is always an accelerating flow. In the case of compressors, that is not the case and there is always a risk that a compressor might enter into stall. So, let us now take a look at, now that I have spoken a lot about types of turbines and their functions and so on. Let us take a look at a typical axial turbine stage. So, What is shown here is a simple schematic of an axial turbine stage. So, an axial turbine stage consists of as I mentioned a nozzle or a stator followed by a rotor. 
So, this is just representing a nozzle through which hot gases from the combustion chamber are expanded and then that passes through a rotor, which is what gives us the power output. Rotor is mounted on what is known as a a disc and of course, the flow from the rotor is exhausted into either a next stage or through the component downstream which could be a nozzle in the case of an aircraft engine. So, usually we would be denoting the stator inlet as station 1, stator exit as station 2 and rotor exit as station 3. In some of the earlier generation turbines, the disc was a separate entity, rotor was mounted on slots which were provided on the disc. And so, those separate mechanism for mounting rotor blades on the disc. Some of the modern day, so it was very soon realized that having a separate disc and different blades, obviously will increase the number of parts. So, the part count will increase tremendously. So, but with modern day manufacturing capabilities in terms of 5 axis and 7 axis numerical machines called CNC machines, computer guided machines, it is possible for us to make them out of a single piece. And this is done in smaller sized engines now, and some of the companies have their own names for that. For example, GE calls such a disc, which is combination of disc and the blade as blisk. Blisk means blade and disc together machined out of a single piece of metal. And similarly, their competitors also have their own terminologies like Pratt and Whitney calls it integrated blade rotor or IBR. There is no distinct root fixture for a blade, because the blade and the disc are a single component. The main advantage being that you have reduced significantly the number of parts, whereas you would have let us say typical turbine blade may have something like 70 to 80 blades or even more of course, mounted on a disc. So, that is like 80 to 90 parts for one stage of a rotor. Now, if you have a blisk, you have just one component, because all the blades have been mounted on one disc. That is a tremendous advantage for in terms of maintenance aspect, but at the same time the primary disadvantage is the fact that if there is one blade which gets damaged in the earlier scenario you just have to replace the blade. Here it becomes impossible to replace the blade and so then of course, you will have to do a rebalancing of the disc and if the damage is severe, then the whole disc has to be replaced. Of course, there are pros and cons of having an integrated blade rotor concept and of course, there are lot of disadvantages and advantages, but that for at least smaller engines economically that is in the long run that seems to be an advantage that you have a combination of the blade and the disc. So, having understood some of the fundamentals of turbines, let us move on to the more important aspect of analysis, the two dimensional analysis that is to do with velocity triangles. I think we spent quite some time discussing velocity triangles for compressors. So, I will assume that you have understood the fundamentals of velocity triangles and try to kind of move on to constructing the velocity triangles just like that unlike in compressors where I had done it step by step. The process is exactly the same as what you had done for a compressor, but of course, it being a turbine there are subtle differences which you need to understand. Now velocity triangle analysis is an elementary analysis and this is elementary to axial turbines as well just like in the case of compressors. 
Now the usual procedure for analysis is to carry out this analysis at the mean blade height and we will have blade speed at that height assuming to be U capital U absolute component of velocity we will denote by C and relative component we will denote by V. And the axial velocity the absolute component of that is obviously, denoted by C subscript A just like in compressors tangential components will be denoted by a subscript W. So, C W is absolute component of tangential velocity, V w is the relative component in the tangential direction. And regarding angles, alpha will denote the angle between the absolute velocity and the axial direction, and beta denotes the corresponding angle for relative velocity. So, these are the terminology's nomenclature that we have used even in a compressor, we will follow exactly the same nomenclature in the case of turbine as well. So, let us move directly to a velocity triangle of a typical turbine stage. So, turbine stage as we have already seen consists of a rotor well a stator or a nozzle. It is usually referred to as a nozzle in the case of turbine, because the flow is accelerated in a stator of a turbine and that is why it is called a nozzle. then you have a rotor which follows a stator or a nozzle. Now, inlet to stator is denoted as station 1, exit is denoted as station 2, exit of the rotor is station 3. So, let us say there is an inlet velocity which is given by C 1, which is absolute velocity entering at an angle alpha 1. It exits the stator or nozzle with a highly rated flow, which is C 2, you can see that C 2 is much higher than C 1 and that is exactly the reason why this is called a nozzle. Now, at the rotor entry, we also have a blade speed U. Please note that the direction of this vector U is from the pressure surface to the suction surface, unlike in compressor, where it was the other way around. Here, the flow drives the blades and that is why you have the blade speed which is in this direction. This is the absolute velocity entering the rotor and relative velocity will be the vector sum of these two or vector difference between these two and that is given by V 2. Alpha 2 is the angle which C 2 makes with the axial direction, beta 2 is the angle which V 2 makes with the axial direction. And just like we have seen in compressors, V 2 enters the rotor at an angle which is tangential to the camber at the leading edge. This is to ensure that the flow, this is obviously when the incidence is close to 0, to ensure that the flow does not separate. At the rotor exit, we have V 3. V 3 is less than V 2 as you can see and of course, that also depends upon the type of the turbine, whether it is impulse or reaction and you also have C 3 here and this is the blade speed U. Beta 3 is the angle which V 3 makes with the axial direction, alpha 3 is the angle which the absolute velocity C 3 makes with the axial direction. So, if you now come go back to the earlier slides of lecture 2 or 3, where we had discussed about velocity triangles for an axial compressor, you can quite easily see the similarities as well as the differences. So, I would strongly urge you to compare both these velocity triangles by keeping them side by side. So, you can understand the differences between a compressor and a turbine. At the same time, you can also try to figure out some similarities between these two components. 
And so, it is very necessary that you understand clearly both the differences as well as the similarities from a very fundamental aspect that is the velocity triangle point of view. So, this is a standard velocity triangle. For a typical turbine stage, I have not really mentioned here what kind of a turbine it is, whether it is impulse or reaction. We will come to that classification very soon and you will see that there are different ways in which you can express the velocity triangle for both of these types of turbines. So, let us now try to take a look at the different types of turbines. I mentioned in the beginning that there are two different configurations of axial turbines that are possible the impulse and the reaction turbine. In an impulse turbine the entire pressure drop takes place in the nozzle and the rotor blades would simply deflect the flow and would have a symmetrical shape. So, there is no acceleration or pressure drop taking place in the rotor in an impulse turbine. So, the rotor blades would simply deflect the flow and guided to the next nozzle if there is one present. In a reaction turbine on the other hand, the pressure drop is shared by the rotor as well as the stator and the amount of pressure drop that is shared is defined by the degree of reaction which we will discuss in detail in next lecture. Now, which means that the degree of reaction of an impulse turbine would be zero because the entire pressure drop has already taken place in the stator. The rotor does not contribute to any pressure drop and so the degree of reaction for an impulse turbine should be zero. So, these are two different configurations of axial turbines which are possible and what we will do is that we will take a look at their velocity triangles also, but before that we need to also understand the basic mechanism by which work is done by a turbine. Now, if you were to apply angular momentum equation for an axial turbine, what you will notice is that the power generated by a turbine is a function of well three parameters, is of course, a mass flow rate. The other parameters are the blade speed and the tangential component of velocity, the absolute velocity. So, if you apply angular momentum at the inlet and exit of the rotor, then the power generated by the turbine is equal to mass flow rate multiplied by U 2 into C w 2, which is the product of the blade speed and the tangential velocity absolute at the inlet of the rotor minus U 3 times C w 3 which is again blade speed at rotor exit and multiplied by the tangential component of the absolute velocity at the rotor exit. Now, we would normally assume that the blade speed is does not change from at a given radial plane and therefore, U 2 can be assumed to be equal to U 3. And therefore, the work done per unit mass would now be equal to plate speed that is U multiplied by C w 2 minus C w 3 or which is also equal to the from the thermodynamics point of view, there is a stagnation pressure, a stagnation temperature drop taking place in a turbine, because the turbine expands the flow and work is extracted from the turbine and therefore, there has to be a stagnation temperature drop taking place in a turbine. Therefore, the enthalpy difference between the inlet and exit of the turbine would basically be equal to the work done by or work developed by this particular turbine. So, work done per unit mass is also equal to C p times T 0 1 minus T 0 3, where this is basically the enthalpy difference. 
C p T 0 1 is enthalpy at inlet of the turbine, C p T 0 3 is the enthalpy at the exit of the turbine. Let us now denote delta T 0, which basically refers to the stagnation temperature. The net change in stagnation temperature in the turbine delta T naught is equal to T 0 1 minus T 0 3, which is also equal to T 0 2 minus T 0 3, because 1 to 2 is the stator and there cannot be any change in stagnation temperature in the stator. Therefore, T 0 1 minus T 0 3 is equal to T 0 2 minus T 0 3. So, we now define what is known as the stage work ratio, which is basically delta T naught by T 0 1 and that is equal to U times C w 2 minus C w 3 divided by C p times T 0 1. So, this basically follows from these two equations here, which correspond to the work done per unit mass, one is in terms of the velocities and the other is in terms of stagnation temperatures. So, a similar analysis was also carried out when we were discussing about axial compressors and where also we had kind of equated the work that the flow does on well work done by the compressor on the flow as compared to the stagnation temperature rise taking place in a compressor as a result of the work done on the flow. So, with there also we had defined the pressure rise or pressure ratio per stage in terms of the temperature rise across that particular stage and the velocity components which come from the velocity triangles. Now what you can see here is that the turbine work per stage would basically be limited by two parameters. One is the pressure ratio that is available for expansion and of course, the other aspect is the allowable the amount of blade stress and turning that is physically possible for one to achieve in the case of a particular turbine. So, there are two parameters, one being the available pressure ratio and the other is the allowable blade stress and turning that one can achieve in a particular turbine configuration. So, in unlike in a compressor where we also had the issue of boundary layer behavior, because the flow was always operating in an adverse pressure gradient mode in compressors in a turbine the pressure gradient is favorable. So, boundary layer behavior is generally something that can be controlled and there are normally not much issues related to boundary layer separation or growth of boundary layer and so on. Of course, there are certain operating conditions under which certain the stages of turbines may undergo local flow separation, but that is only for short durations. In general, in a favorable pressure gradient boundary layers generally tend to be well behaved. Now the turbine work ratio that we had seen in the previous slide is also often defined in as a ratio between the work done per unit mass divided by the square of the blade speed. Therefore, W t by U square which is also equal to the enthalpy rise or rather enthalpy drop in the case of turbine divided by U square, which is basically equal to delta C w divided by U or net change in the tangential velocity absolute divided by the blade speed. Now, this is an important parameter because based on this we can understand the differences between an impulse turbine and a reaction turbine, which is what we are going to do next. To take a look at what are the fundamental differences besides of course, the fact that an impulse turbine flow is the entire pressure drop takes place only in the nozzle and in reaction turbine that is shared between the nozzle and the rotor. 
Let us take up an impulse turbine first; we will take a look at the velocity triangles for an impulse turbine, then try to find out the work ratio per stage of an impulse turbine and relate it to some parameters which we can get from the velocity triangles. So, here we have a typical impulse turbine stage, a row of nozzle blades followed by a row of rotor blades. The flow is accelerated in the nozzle, and so the velocity that reaches the rotor, the absolute component, is C 2, at an angle of alpha 2 with the axial direction, and as a result of the blade speed U, the relative velocity which enters the rotor is V 2, which is at an angle of beta 2 with the axial direction. And in an impulse turbine, I mentioned that the rotor simply deflects the flow and there is no pressure drop taking place in the rotor. And therefore, at the exit of the rotor we have V 3, which is at an angle of beta 3, and by virtue of the symmetry of the blades we will have beta 2 equal to minus beta 3, and in magnitude V 2 would be equal to V 3. This we can also see from the velocity triangle shown here: C 2 is the absolute velocity entering the rotor, V 2 is the relative velocity, and the corresponding angles here are alpha 2 and beta 2. Now, in the rotor we have V 3, which is equal to V 2 in magnitude, but at an angle which is different from the inlet; that is, beta 3 will be the negative of beta 2, in the other direction. The absolute velocity leaving the blade is C 3. Now, if you look at the other components of velocity: this is the axial component of the absolute velocity, C a, and the corresponding tangential components of the relative velocity, V w 2 and V w 3, are obviously equal and opposite in direction; you can see that these are equal in magnitude, but of course, the directions are opposite, because V 2 and V 3 are in opposite directions. And C w 2 is the tangential component of the absolute velocity at the inlet, and C w 3 that at the exit of the rotor. So, this is a typical velocity triangle of an impulse turbine stage, and if you take a closer look at the velocity triangles, I had mentioned that the angles beta 3 and beta 2 are equal in magnitude, but they differ in their orientation. So, beta 3 is equal to minus beta 2, which means that we have V w 3 equal to minus V w 2. And the difference in the tangential component of the absolute velocity, C w 2 minus C w 3, will be equal to twice of V w 2. So, let us take a look at the velocity triangle again. C w 2 minus C w 3 is equal to the sum of V w 2 and V w 3, and since they are equal in magnitude, that is equal to twice of V w 2, which is also equal to 2 into C w 2 minus U, or this is equal to 2 U into C a by U tan alpha 2 minus 1. That again comes from the velocity triangles: you can see that C a tan alpha 2 is this component; subtracting U and doubling it gives this difference. So, the difference between the tangential components of the absolute velocity, C w 2 and C w 3, that is delta C w for an impulse turbine, will be equal to 2 U into C a by U tan alpha 2 minus 1. Therefore, the work ratio that we have defined earlier for an impulse turbine, that is delta H naught by U square, is equal to 2 into C a by U tan alpha 2 minus 1. We will now take a look at what happens in the case of a reaction turbine, calculate the work ratio as applicable for a reaction turbine and see whether there is a fundamental difference between the work ratio of an impulse turbine and that of a reaction turbine.
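A short sketch of the impulse-stage relations follows. It is not from the lecture; the values of U, C a and alpha 2 are assumptions for illustration, and the sketch simply checks that delta C w obtained from the triangle matches 2 U (C a / U tan alpha 2 - 1).

```python
import math

# Impulse stage: symmetric rotor blades, beta3 = -beta2, |V3| = |V2|.
# Then dCw = Cw2 - Cw3 = 2*Vw2 = 2*(Cw2 - U) = 2*(Ca*tan(alpha2) - U),
# so the work ratio dh0/U^2 = 2*(Ca/U * tan(alpha2) - 1).
# Values below are illustrative assumptions.
U = 340.0                      # blade speed, m/s
Ca = 280.0                     # axial velocity, m/s
alpha2 = math.radians(65.0)    # nozzle exit flow angle

Cw2 = Ca * math.tan(alpha2)
Vw2 = Cw2 - U
dCw_impulse = 2.0 * Vw2
work_ratio_impulse = 2.0 * (Ca / U * math.tan(alpha2) - 1.0)

print(f"dCw (impulse)     = {dCw_impulse:.1f} m/s")
print(f"dh0/U^2 (impulse) = {work_ratio_impulse:.3f}")
print(f"consistency check : {dCw_impulse / U:.3f} (should equal dh0/U^2)")
```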
Now, let us take a look at a typical 50 percent reaction turbine, just for simplicity. The reason why we took up a 50 percent reaction turbine is that in a 50 percent reaction turbine, the pressure drop is shared equally between the nozzle and the rotor. And therefore, the velocity triangles, as you can see, are mirror images of one another. The velocity triangle at the inlet of the rotor is this, where this is C 2, the absolute velocity coming into the rotor from the nozzle, V 2 is the relative velocity and this is the blade speed. And since they are mirror images, at the exit of the rotor you have V 3 and C 3, and therefore, you can clearly see that C 2 will be equal to V 3 and V 2 will be equal to C 3; correspondingly for the angles, alpha 2 will be equal to beta 3 and beta 2 will be equal to alpha 3. So, this is true only for a 50 percent reaction turbine; for any other reaction stage, of course, the velocity triangles need not necessarily be symmetrical, and this is also assuming that the axial velocity does not change across the rotor and the nozzle. Now, for this kind of a reaction turbine, which has a degree of reaction of 0.5, since the velocity triangles are mirror images or symmetrical, if you assume constant axial velocity, we have C w 3 equal to minus of C a tan alpha 2 minus U, that is, U minus C a tan alpha 2. And therefore, the work ratio would basically be equal to twice of C a by U into tan alpha 2, the whole minus 1. This we can compare with that of the impulse turbine, where it was C a by U tan alpha 2 minus 1, the whole multiplied by 2. So, you can immediately see that there is a fundamental difference between the work ratio of an impulse turbine and that of a reaction turbine, the example here of course being a 50 percent reaction turbine. So, there is a fundamental difference between the work ratio as applicable for an impulse turbine as compared to that of a 50 percent reaction turbine, and in general for any reaction turbine as well. Now, this was as far as the different types of turbine configurations were concerned: how one can analyze these turbine configurations, what the fundamental differences are between, let us say, an impulse turbine and a reaction turbine, and how one can estimate from the velocity triangle the work ratio or the work done by these kinds of turbine stages. So, what I was suggesting right at the beginning was that you can clearly see differences between compressors and turbines by looking at the velocity triangle for these two different cases and comparing them, to understand the fundamental working of compressors and turbines and what makes them two different components. What we can take up next for discussion is something we have discussed in detail for compressors as well, that is to do with a cascade. And as you have already seen, a cascade is a simplified version of a rotating machine. And you could have different versions of a cascade: you could have a linear cascade or an annular cascade. And basically, a cascade would have a set of similar blades, which are all arranged in a certain fashion at a certain angle, which we have referred to as the stagger angle. And cascade analysis forms a very fundamental part of the design of turbo machines, whether compressors or turbines. So, a cascade basically consists of an array of stationary blades and is constructed basically for the measurement of performance parameters.
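To make the comparison concrete, here is a small Python sketch, again with assumed values of the flow coefficient and nozzle angle (the same illustrative numbers as before, not lecture data), evaluating the 50 percent reaction work ratio alongside the impulse one.

```python
import math

# 50 percent reaction stage: mirror-image velocity triangles, constant axial velocity.
# Cw3 = -(Ca*tan(alpha2) - U), so dCw = 2*Ca*tan(alpha2) - U and
# dh0/U^2 = 2*(Ca/U)*tan(alpha2) - 1.
# Compare with the impulse result 2*(Ca/U*tan(alpha2) - 1) for the same U, Ca, alpha2.
U, Ca = 340.0, 280.0           # m/s (assumed)
alpha2 = math.radians(65.0)    # nozzle exit flow angle (assumed)

phi = Ca / U                   # flow coefficient
work_ratio_reaction = 2.0 * phi * math.tan(alpha2) - 1.0
work_ratio_impulse = 2.0 * (phi * math.tan(alpha2) - 1.0)

print(f"dh0/U^2, 50% reaction : {work_ratio_reaction:.3f}")
print(f"dh0/U^2, impulse      : {work_ratio_impulse:.3f}")
# For the same flow coefficient and nozzle angle the two expressions differ by
# exactly 1, which is the fundamental difference referred to in the lecture.
```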
And what is usually done is that we would like to eliminate any three dimensional effects which are likely to come up in a cascade, and one of the sources of three dimensionality is the presence of the boundary layer. So, one would like to remove the boundary layer from the end walls of a cascade, and so it is a standard practice to have porous end walls through which boundary layer fluid can be removed, to ensure two dimensionality of the flow entering the cascade. Now, it is also a standard assumption that radial variations in the velocity field can be eliminated or ignored, and cascade analysis is primarily meant to give us some idea about the amount of blade loading that a particular configuration can give us, as well as the losses in total pressure that one can measure from a cascade analysis. Turbine cascade testing also involves wind tunnels which are very similar to what we had discussed for compressors. I had shown you cascade wind tunnels when we were discussing cascades in the context of compressors. Turbine cascades are also tested in similar wind tunnels, just that in the case of a turbine, since it operates in an accelerating flow, there is a requirement of a certain pressure drop across the turbine. Therefore, the wind tunnel is required to generate sufficient pressure which can be expanded through the turbine cascade. Now, turbine blades, as you are probably aware, are likely to have much higher camber than compressor blades, and turbine cascades are set at a negative stagger unlike compressor blades, something I will explain when we take up a cascade schematic in detail. Now, cascade analysis will basically give us, as I mentioned, two parameters, besides a set of other parameters like boundary layer thickness and losses, etcetera. The most fundamental parameter we would like to look at from the cascade analysis is the surface static pressure distribution or C P distribution, which is related to the loading of the blade. And the second aspect is the total pressure loss across the cascade, which is yet another parameter that one would like to infer from the cascade analysis. Now, let us take a look at typical turbine cascade nomenclature. I think I mentioned in the beginning that all the terms that we have used for compressors carry over; it is the same nomenclature that we apply for a turbine as well. Just that the way the blades are set, that is the blade geometry, is quite different between compressors and turbines. So, if you look at a typical turbine cascade, these are the blades; you can immediately see that these blades have much higher turning or camber than compressor blades. So, it is a set of these blades, arranged either linearly or in an annular fashion, which constitutes a cascade. So, these blades are set apart by a certain distance which is, as you can see, denoted by the pitch or spacing, and these blades are set at a certain angle which is called the blade setting or the stagger angle. So, this lambda which you see here refers to the blade setting or stagger angle. The blades have a certain camber, which is basically the angle subtended between the tangent to the camber line at the leading edge and that at the trailing edge. So, the difference between the two gives us the blade camber. Now, the flow enters the cascade at a certain angle; you can see that the blade inlet angle is given here as beta 1 and the blade outlet angle is beta 2.
Now, so if there is a difference between the blade angle and the flow angle at the inlet that is basically the incidence which is denoted by I here. So, this is the incidence angle. Similarly, a difference between the blade outlet angle and the outflow angle is the deviation which is denoted by delta. So, at the exit you may have a flow deviation at the inlet one may have an incidence. And if you draw a normal or at normal to the tangent at the trailing edge and take it to the next adjacent blade, the suction surface of the adjacent blade. So, this distance that you see here is basically referred to as the throat or opening at the turbine exit and that is here denoted by a symbol o. The blade chord as you already know is denoted by C. And then the blades also would have a certain finite thickness at the trailing edge. So, that is denoted here by the trailing edge thickness. So, the blades practically will have a certain amount of finite thickness and that is what is denoted here as the thickness at the trailing edge. So, these are the fundamental nomenclatures, nomenclature that is used in turbine, a very similar aspect was also used in compressor, where we had defined all these different parameters like incidence and deflection, deviation and blade angle, the camber, pitch, stagger all of them were defined. The difference is of course, the blade, the way the blades are set, this is set at a negative stagger as you can see, the compressor cascade if you go back, you will see that the way the blades are set is opposite to what you see in the case of a turbine. That is basically to ensure that the flow passage gives you the required amount of flow turning and also the flow acceleration in the case of turbine cascades and in compressor cascade, the setting is to ensure that you get deceleration in a compressor. So, having understood the fundamental nomenclature of a turbine cascade, we would now take a closer look at the different aspects of flow through a cascade and I would be deriving well not really a detailed derivation, but I would just give you some idea about how one can calculate the lift developed by a certain cascade, turbine cascade in two different cases. One is if we do not assume any losses or if it is an inviscid analysis and followed by a viscous analysis, one of course, would also get a drag in the case of a viscous analysis, how one can calculate the lift and of course, that is basically related to the loading of the blades eventually. So, the basic idea of cascade analysis is that just like in the case of an airfoil, because cascade is in some sense an airfoil analysis, we can determine the lift and drag forces acting on the blades. And this analysis as I mentioned can be carried out using both these assumptions of potential flow or inviscid analysis or by considering viscous effects in a rather simplistic manner. So, we will assume that the mean velocity which we are going to denote as V subscript M makes an angle of alpha subscript M with the axial direction. What we will do is to determine the circulation developed on the blades and subsequently the lift force. In the inviscid analysis, obviously there is no drag and there is only a lift force, which lift is the only force acting on the blade in the case of an inviscid analysis. When you take up a viscous analysis, there are two components of the force and the resultant force, they will lift and they drag. So, this is the geometry we are considering for an inviscid flow through a turbine cascade. 
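The nomenclature described above can be summarised in a small sketch. This is not from the lecture: the sign conventions chosen for the angles, and the direction in which the incidence and deviation differences are taken, are assumptions of this sketch, and all the numbers are made up for illustration.

```python
# Turbine cascade nomenclature sketch (angles in degrees; all values assumed).
# Conventions assumed here:
#   incidence  i     = inlet flow angle  - blade inlet angle
#   deviation  delta = exit flow angle   - blade outlet angle
#   camber     theta = blade inlet angle - blade outlet angle (tangents to camber line)
blade_inlet_angle = 35.0     # blade metal angle at the leading edge
blade_outlet_angle = -60.0   # blade metal angle at the trailing edge
flow_inlet_angle = 38.0      # actual inflow angle
flow_outlet_angle = -58.5    # actual outflow angle
pitch, chord = 0.06, 0.10    # spacing S and chord C, m
stagger = -35.0              # blade setting angle (negative for a turbine cascade)

incidence = flow_inlet_angle - blade_inlet_angle
deviation = flow_outlet_angle - blade_outlet_angle
camber = blade_inlet_angle - blade_outlet_angle

print(f"incidence = {incidence:+.1f} deg, deviation = {deviation:+.1f} deg")
print(f"camber = {camber:.1f} deg, stagger = {stagger:.1f} deg, pitch/chord = {pitch/chord:.2f}")
```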
If you take a look at two different stream lines, let us say this is one stream line and another stream line, which is bounding one particular blade that is shown here. These are the two different stream lines, what we are going to do is to find the circulation produced over this particular airfoil which is currently an airfoil here and then relate that to the lift developed on this particular blade. So, the inlet flow entering the cascade is V 1 and the flow exiting the blade is V 2 and of course, we will assume a mean velocity of V m which makes an angle alpha m with the axial direction. So, if this is the case and this is how you can take a look at the circulation axis. So, this is the axis along which we are calculating the circulation and therefore, this is the lift acting on this particular blade. Since it is a turbine blade, you know that this is basically the direction in which the lift is going to act. So, the mean velocity that is shown here by vector V m acts in this direction. This is the inflow velocity V 1 and this is the exit velocity V 2. So, circulation that is denoted by capital lambda here is equal to S multiplied by the difference in the tangential velocities V w 2 minus V w 1 and lift is related to the circulation, which is the product of density times the mean velocity and the circulation. Therefore, lift acting when there are no other effects considered like viscous effects, then the lift acting here would be simply the product of rho times V m into the circulation which is S into V w 2 minus V w 1. So, this is expressed in a non-dimensional form which we refer to as the lift coefficient. So, C L here is lift divided by half rho V m square into C and this is equal to rho into V m into S V w 2 minus V w 1 by half rho V m square into C. So, this can be related to the angles the across the cascade and so, we can simplify this lift coefficient as 2 into S by C into tan alpha 2 minus tan alpha 1 multiplied by cos alpha m. So, this is the this is basically the lift coefficient on a turbine blade assuming that the flow is inviscid. Now, what happens if there are viscous effects? The primary effect of viscous flow on the flow through a turbine cascade is the fact that viscous effects will manifest themselves in the form of pressure losses, total pressure losses. And therefore, the wake from the blade trailing edge will lead to a non-uniform velocity leaving the blades. In the previous analysis, we were assuming uniform velocity entering the blades and uniform velocity leaving the blades, because it is a potential flow. So, here in the case of viscous analysis, in addition to lift, one would also have a drag, which will also contribute to lift in some way or the other. So, the effective force acting on the blade will be a resultant of both the lift as well as the drag acting on the blade. So, we now define what is known as a total pressure loss coefficient, we have defined a similar parameter for compressors as well. So, this is denoted by omega bar, because there is a total pressure loss taking place across the blades as a result of the viscous effects. So, omega bar is equal to P 0 1 minus P 0 2 divided by half rho V 2 square. So, this is the loss in total pressure across the turbine cascade. So, the schematic I had shown earlier now gets modified, because you have a set of uniform stream lines entering the turbine cascade, but as they leave the cascade you can see that they have become non-uniform basically at the trailing edge where there is a wake. 
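Here is a minimal sketch of the inviscid part of this analysis: circulation from the change in tangential velocity, lift from the circulation, and the closed-form lift coefficient. The mean-angle definition tan(alpha_m) = (tan alpha_1 + tan alpha_2)/2 is a common convention assumed here, and all numerical values are illustrative, not lecture data.

```python
import math

# Inviscid cascade lift sketch. Flow angles are measured from the axial direction.
rho = 1.0                      # kg/m^3 (assumed)
Ca = 150.0                     # axial velocity, m/s (assumed constant through the cascade)
alpha1 = math.radians(0.0)     # inflow angle (assumed)
alpha2 = math.radians(65.0)    # outflow angle (assumed)
S, C = 0.06, 0.10              # pitch and chord, m (assumed)

tan_am = 0.5 * (math.tan(alpha1) + math.tan(alpha2))   # assumed mean-angle convention
alpha_m = math.atan(tan_am)
Vm = Ca / math.cos(alpha_m)    # mean velocity

# Circulation around one blade and the resulting (inviscid) lift per unit span
Vw1, Vw2 = Ca * math.tan(alpha1), Ca * math.tan(alpha2)
circulation = S * (Vw2 - Vw1)
lift = rho * Vm * circulation

# Lift coefficient, directly and from the closed-form expression in the lecture
CL = lift / (0.5 * rho * Vm**2 * C)
CL_formula = 2.0 * (S / C) * (math.tan(alpha2) - math.tan(alpha1)) * math.cos(alpha_m)

print(f"circulation = {circulation:.1f} m^2/s, lift = {lift:.1f} N per unit span")
print(f"C_L = {CL:.3f} (direct) vs {CL_formula:.3f} (formula)")
```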
So, what is shown here schematically are the different wakes of all the blades that are present here. So, there is a difference in the forces acting on the blade as a result of this non-uniformity in the velocity at the exit of the turbine cascade. So, in this case, we can relate the drag to the losses, the total pressure losses: the drag is omega bar into S into cos alpha m. And therefore, the effective lift will now be equal to the sum of the lift as well as the component of drag in that effective direction, that is, omega bar into S into cos alpha m, and lift, we know, is the product of the density, the mean velocity and the circulation. So, that is rho V m into the circulation plus omega bar S cos alpha m. Therefore, the lift coefficient in this case will get modified as twice into S by C, tan alpha 2 minus tan alpha 1, cos alpha m, plus the drag component C D times tan alpha m. So, this is the manner in which we can calculate the lift coefficient for both these cases: one is for the case without viscous effects and the second is if we consider viscous effects. So, the basic idea of calculating these coefficients was also to calculate the blade efficiency. So, based on the calculation of the lift and drag coefficients, we can now calculate the blade efficiency, which is basically the ratio of the ideal static pressure drop required to obtain a certain change in kinetic energy to the actual static pressure drop which will produce the same change in kinetic energy. I have of course skipped the derivation of the blade efficiency here, but it can be related to the lift and drag coefficients: the blade efficiency is 1 minus C D by C L tan alpha m, divided by 1 plus C D by C L cot alpha m. And if you were to neglect the drag term in the lift definition, because C D, the drag term, is usually much smaller in comparison to the lift, the blade efficiency is simply 1 divided by 1 plus 2 into C D by C L sin of twice alpha m. So, this basic idea of calculating the lift and drag coefficients was also to calculate the blade efficiency, which is basically a function of C D, C L and the mean angle alpha m. So, let me now quickly recap our discussion in today's class. We had taken up three distinct topics for discussion. One was the different types of axial turbine configurations, the impulse and the reaction turbine stages, and we have had a look at the velocity triangles and how you can calculate the work ratio for impulse and reaction turbine stages. We also carried out the work and stage dynamics analysis; we have looked at these different configurations of axial turbines and how we can go about determining the work ratio for these two configurations of axial turbine. And then we had some discussion on turbine cascades and the calculation of lift and drag for a typical turbine configuration and how we can use that information to calculate the blade efficiency from simple turbine cascade testing. That is, simple cascade testing can actually give us some idea about the blade efficiency that this kind of a blade configuration can give us. So, that brings us to the end of this lecture.
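Before moving on, here is a sketch of the viscous case and the blade-efficiency relations recapped above. It is not from the lecture: the step of restoring the exit dynamic head in the drag relation, the mean-angle convention, and all numerical values are assumptions of this sketch.

```python
import math

# Viscous cascade sketch: total pressure loss, drag, corrected lift coefficient,
# and blade efficiency. The relation D = omega_bar * (0.5*rho*V2^2) * S * cos(alpha_m)
# restores the exit dynamic head that the spoken formula leaves implicit (assumption).
rho, Ca = 1.0, 150.0                       # assumed
alpha1, alpha2 = math.radians(0.0), math.radians(65.0)
S, C = 0.06, 0.10                          # pitch and chord, m (assumed)
omega_bar = 0.04                           # total pressure loss coefficient (assumed)

alpha_m = math.atan(0.5 * (math.tan(alpha1) + math.tan(alpha2)))
Vm = Ca / math.cos(alpha_m)                # mean velocity
V2 = Ca / math.cos(alpha2)                 # cascade exit velocity

drag = omega_bar * 0.5 * rho * V2**2 * S * math.cos(alpha_m)
CD = drag / (0.5 * rho * Vm**2 * C)

# Lift coefficient with the drag contribution added (turbine cascade sign convention)
CL = (2.0 * (S / C) * (math.tan(alpha2) - math.tan(alpha1)) * math.cos(alpha_m)
      + CD * math.tan(alpha_m))

# Blade efficiency: exact form, and the approximation with the drag term neglected
eps = CD / CL
eta_exact = (1.0 - eps * math.tan(alpha_m)) / (1.0 + eps / math.tan(alpha_m))
eta_approx = 1.0 / (1.0 + 2.0 * eps / math.sin(2.0 * alpha_m))

print(f"CD = {CD:.4f}, CL = {CL:.3f}")
print(f"blade efficiency: exact {eta_exact:.4f}, approximate {eta_approx:.4f}")
```

The two efficiency expressions agree closely because the drag-to-lift ratio is small, which is exactly the condition under which the approximation is stated to hold.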
We will continue our discussion on axial turbines in the next lecture as well, where we will primarily be talking about the performance parameters, degree of reaction losses as well as efficiency of axial turbines, and where we will also take up detailed discussion on what are the different losses in a two dimensional sense, and how we can define efficiency, and you will see that there are different ways of defining efficiency for a turbine. So, we will take up some of these topics for discussion in the next class.", "transcription_large_v3": " Hello and welcome to lecture number 20 of this lecture series on Turbomachinery Aerodynamics. We have we are probably half way through this course and I guess you must have had some good idea about what in what is involved in turbo machinery analysis and what is involved in design of different types of turbo machines especially the compressors. Now starting the last lecture onwards we are now looking at the axial turbines and of course, subsequently we will also be talking about the radial turbines and so on. So, I think in the last class you must have had got some introduction to what axial turbines are and what constitutes axial turbines and so on. So, let us take that discussion little bit further in today's class, where we will be talking about a two dimensional analysis of axial compressors, well axial turbines in a very similar fashion to what we had discussed for axial compressors. If you remember during one of the initial lectures, probably the lecture second lecture or the third lecture, we had been talking about axial compressors and how one can analyze axial compressors in a two dimensional sense. So, we will carry out a similar analysis and discussion in today's class about how the same thing can be carried out for turbines, axial turbines in particular. In today's class, we basically being going to talk about the following topics. We will initially discuss have some introduction to axial turbines, turbines in general. We will talk about impulse and reaction turbine stages, which are the two basic types of axial turbines. We will then talk about the work and stage dynamics, how you can calculate work done by a turbine and how is it different for impulse and reaction turbines. We will then spend some time on discussion about turbine blade cascade. We will assume that the nomenclature we had used in the case of compressors will still be valid, but of of course, we will just highlight some simple differences between a compressor cascade and a turbine cascade, but the nomenclature remains the same in the sense that what we had called as camber or stagger or incidence all that remains the same for a turbine. So, I will probably spend lesser time discussing about those and take up some more topics on cascade analysis, which we had not really covered in detail in compressors. Now, when we talk about turbines, you must have had some discussion or some introduction to the different types of turbines. As we know, there are different types of compressors like axial and centrifugal. Similarly, we have different types of turbines as well. Now, in a turbine, just like in a compressor, we have different components. In a compressor, you know that we have a rotor followed by a stator. In the case of turbines, we have a nozzle or a stator which precedes a rotor. 
So, a nozzle or a stator guides and accelerates the flow into a into a rotor and of course, the work extraction takes place in the rotor and which is unlike in a compressor, where it is the rotor which comes first and drives the flow and then that goes into a stator which again turns it back to the axial direction and so on diffusion takes place in both the rotor and the stator. In a turbine as well you could have differential amounts of acceleration or pressure drop taking place in the rotor and the stator and there are certain types of turbines where the entire pressure drop takes place only in the stator this the rotor does not contribute to any pressure drop, it simply deflects the flow. These are called impulse turbines. We will discuss that in little more detail in some of these later slides. So, basically the flow in a turbine is accelerated in nozzle or a stator and it then passes through a rotor. In a rotor, the working fluid basically imparts momentum to the rotor and basically that converts the kinetic energy to power output. Now, depending upon the power requirement this process obviously is repeated in multiple stages, you would have number of stages which will generate the required work output, which is also similar to what you have in a compressor, where you might have multiple stages which basically are meant to give you the required pressure rise in a typical axial compressor. Now we have seen this aspect in compressors as well that due to the motion of the rotor blades you have basically two distinct components or types of velocities. One is the absolute component or type of velocity and the other is the relative component or relative velocity. This was also discussed in detail in compressors and so you would in a turbine analysis that we will do by analyzing the velocity triangle, you will see that there are these two distinct components which will become obvious when we take up velocity triangles and this very similar to what you had discussed in compressor. So, if you have understood velocity triangle construction for an axial compressor, it is pretty much the same in the case of a turbine as well. So, that will that is probably the reason why it will make it simpler for you to understand the construction of a velocity triangle. Now, the fundamental difference between compressor and a turbine is the fact that a compressor is required to generate a certain pressure rise. There is a work input into the compressor, which is what is used in increasing the pressure across the compressor. Compressor operates in a adverse pressure gradient mode, that is the flow always sees an increasing pressure downstream. In the case of a turbine, it is not that case, it is the other way around that the flow always sees a favorable pressure gradient, because there is a pressure drop taking place in a turbine, which leads to a, which is how the turbine extracts work from the flow, that is it converts part of the kinetic energy, which the flow has into work output. And therefore, in a turbine, the flow always sees a favorable pressure gradient and that is one fundamental difference between a turbine and a compressor. Now, because you have a favorable pressure gradient, the problems that we have seen in the case of compressor like flow separation and blade stall and surge and all that does not really affect a turbine, because a turbine, the flow is always in a accelerating mode and so the problem of flow separation does not really limit the performance of a turbine. 
So, it is possible that we can extract lot more work per stage in a turbine as compared to that of a compressor. And therefore, you would if you have noticed a schematic of a typical modern day jet engine, you will find that there are numerous stages of compressors may be 15 or 20, which are actually driven by may be 2 or 3 stages of turbine. So, each stage of a turbine can actually give you much greater pressure drop than what we can achieve or the kind of pressure rise we can achieve in one stage of a compressor, which is why a single stage of a turbine can drive multiple stages of compressor. So, that is a very important aspect that you need to understand, because the fundamental reason for this being the fact that turbines operate in a favorable pressure gradient, compressors operate in an adverse pressure gradient. So, there are limitations in a compressor, which will prevent us from having very high values of pressure rise per stage. That is not a limitation in a turbine and that is why you have much greater pressure drop taking place in a turbine as compared to that of the pressure rise that you get from one stage of a compressor. So, turbines like compressors can be of different types. Compressors we have seen can be either axial or centrifugal. In the case of turbines, you can in fact, in in the some literature which also says they could also have mixed type of compressors axial and centrifugal mixed. Similar thing is also there in the case of turbine, you could have an axial turbine or a radial turbine or a mixture or combination of the two called mixed flow turbines. Axial turbines obviously can handle large mass flows and obviously are more efficient as very similar analogy we can take from compressors, which have larger mass flow and are obviously more efficient. Axial turbine main advantage is that it has the same frontal area of that of a compressor and also it is possible that we can use an axial turbine with that of a centrifugal compressor. So, that is also an advantage and what is also seen is that efficiency of turbines are usually higher than that of compressors. The basic reason again is related to the comment I made earlier that turbines operate in a favorable pressure gradient and so the problems that flow sees in an adverse pressure gradient is not seen. There are no problems of flow separation except in some rare cases and this also means that theoretically turbines are easier to design. Well, easier is in quote and uncoat, well in the sense that you know compressors require little more care in terms of aerodynamic design, but of course turbines have a different problem because of high temperatures and so turbine blade cooling and associated problems that is an entirely different problem altogether. So, aerodynamically if you have to design a compressor and a turbine, turbines would be a tad easier to design than compressors, just because of the fact that you do not have to really worry about the chances of flow separation across a turbine, because it is always an accelerating flow. In the case of compressors, that is not the case and there is always a risk that a compressor might enter into stall. So, let us now take a look at, now that I have spoken a lot about types of turbines and their functions and so on. Let us take a look at a typical axial turbine stage. So, what is shown here is a simple schematic of an axial turbine stage. So, an axial turbine stage consists of, as I mentioned, a nozzle or a stator followed by a rotor. 
So, this is just representing a nozzle through which hot gases from the combustion chamber are expanded and then that passes through a rotor, which is what gives us the power output. Rotor is mounted on what is known as a disk and of course, the flow from the rotor is exhausted into either a next stage or through the component downstream, which could be a nozzle in the case of an aircraft engine. So, usually we would be denoting the stator inlet as station 1, stator exit as station 2 and rotor exit as station 3. In some of the earlier generation turbines, the disk was a separate entity, rotor was mounted on slots which were provided on the disk and so those separate mechanism for mounting rotor blades on the disk. Some of the modern day, so it was very soon realized that having separate disk and different blades, obviously will increase the number of parts. So, the part count will increase tremendously. So, but with modern day manufacturing capabilities in terms of 5 axis and 7 axis numerical machines called CNC machines, computer guided machines, it is possible for us to make them out of a single piece. And this is done in smaller sized engines now, and some of the companies have their own names for that. For example, GE calls such a disc, which is combination of disc and the blade as blisk. Blisk means blade and disc together machined out of a single piece of metal. And similarly, their also have their own terminologies like Patten Whitney calls it integrated blade rotor or IBR, where there is no distinct root fixture for a blade, because a blade and the disk are a single component. The main advantage being that you have reduced significantly the number of parts, whereas you would have let us say typical turbine blade may have something like 70 to 80 blades or even more of course, mounted on a disc. So, that is like 80 to 90 parts for one stage of a rotor. Now, if you have a blisk, you have just one component because all the blades have been mounted on one disc. That is a tremendous advantage for in terms of maintenance aspect, but at the same time, the primary disadvantage is the fact that if there is one blade which gets damaged, in the earlier scenario you just have to replace the blade. Here it becomes impossible to replace the blade and so then of course, you will have to do a rebalancing of the disc and if the damage is severe, then the whole disc has to be replaced. Of course, there are pros and cons of having an integrated blade rotor concept and of course, there are lot of disadvantages and advantages, but that for at least smaller engines economically that is in the long run that seems to be an advantage that you have a combination of the blade and the disc. So, having understood some of the fundamentals of turbines, let us move on to the more important aspect of analysis, the two dimensional analysis that is to do with velocity velocity triangles. I think we spent quite some time discussing velocity triangles for compressors. So, I will assume that you have understood the fundamentals of velocity triangles and try to kind of move on to constructing the velocity triangles just like that unlike in compressors where I had done it step by step. The process is exactly the same as what you had done for a compressor, but of course, it being a turbine there are subtle differences, which you need to understand. Now, velocity triangle analysis is an elementary analysis and this is elementary to axial turbines as well, just like in the case of compressors. 
Now, the usual procedure for analysis is to carry out this analysis at the mean blade height and we will have a blade speed at that height assuming to be U, capital U, absolute component of velocity we will denote by C and relative component we will denote by V. And the axial velocity, the absolute component of that is obviously denoted by C subscript a just like in compressors, tangential components will be denoted by a subscript w. So, C w is absolute component of tangential velocity, V w is the relative component in the tangential direction. And regarding angles, alpha will denote the angle between the absolute velocity and the axial direction and beta denotes the corresponding angle for relative velocity. So, these are the terminologies nomenclature that we have used even in a compressor, we will follow exactly the same nomenclature in the case of turbine as well. So, let us move directly to a velocity triangle of a typical turbine stage. So, turbine stage as we have already seen consists of a rotor, well a stator or a nozzle. It is usually referred to as a nozzle in the case of turbine, because the flow is accelerated in a stator of a turbine and that is why it is called a nozzle. And then you have a rotor which follows a stator or a nozzle. Now, inlet to stator is denoted as station 1, exit is denoted as station 2, exit of the rotor is station 3. So, let us say there is an inlet velocity which is given by C 1, which is absolute velocity entering at an angle alpha 1, it exits the stator or nozzle with a highly accelerated flow, which is C 2. You can see that C 2 is much higher than C 1 and that is exactly the reason why this is called a nozzle. Now, at the rotor entry, we also have a blade speed u. Please note that the direction of this vector u is from the pressure surface to the suction surface, unlike in compressor where it was the other way around. Here the flow drives the blades and that is why you have the blade speed which is in this direction. This is the absolute velocity entering the rotor, and relative velocity will be the vector sum of these two or vector difference between these two and that is given by V 2. Alpha 2 is the angle which C 2 makes with the axial direction, beta 2 is the angle which V 2 makes with the axial direction. And just like we have seen in compressors, V 2 enters the rotor at an angle which is tangential to the camber at the leading edge. This is to ensure that the flow, this is obviously when the incidence is close to 0 to ensure that the flow does not separate. At the rotor exit, we have V 3. V 3 is less than V 2 as you can see and of course, that also depends upon the type of the turbine whether it is impulse or reaction and you also have C 3 here and this is the blade speed u, beta 3 is the angle which B 3 makes with the axial direction, alpha 3 is the angle which the absolute velocity C 3 makes with the axial direction. So, if you now come go back to the earlier slide of lecture 2 or 3, where we had discussed about velocity triangles for an axial compressor, you can quite easily see the similarities as well as the differences. So, I would strongly urge you to compare both these velocity triangles by keeping them side by side. So, you can understand the differences between a compressor and a turbine. At the same time, you can also try to figure out some similarities between these two components. 
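As an illustration of the nomenclature just laid out, here is a minimal sketch that builds the rotor-inlet velocity triangle from the blade speed U, the axial velocity C a and the nozzle exit angle alpha 2. It is not part of the lecture; the numbers are assumed for illustration only.

```python
import math

# Rotor-inlet velocity triangle of an axial turbine stage.
# Nomenclature as in the lecture: C absolute, V relative, subscript a axial, w tangential.
U = 340.0                      # blade speed at mean blade height, m/s (assumed)
Ca = 280.0                     # axial velocity, m/s (assumed)
alpha2 = math.radians(65.0)    # absolute flow angle at nozzle exit (assumed)

Cw2 = Ca * math.tan(alpha2)    # tangential component of the absolute velocity
C2 = math.hypot(Ca, Cw2)       # absolute velocity magnitude entering the rotor

Vw2 = Cw2 - U                  # tangential component of the relative velocity
V2 = math.hypot(Ca, Vw2)       # relative velocity magnitude entering the rotor
beta2 = math.degrees(math.atan2(Vw2, Ca))   # relative flow angle from the axial direction

print(f"C2 = {C2:.1f} m/s at alpha2 = {math.degrees(alpha2):.1f} deg")
print(f"V2 = {V2:.1f} m/s at beta2  = {beta2:.1f} deg")
```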
And so, it is very necessary that you understand clearly both the differences as well as the similarities from a very fundamental aspect that is the velocity triangle point of view. So, this is a standard velocity triangle for a typical turbine stage. I have not really mentioned here what kind of a turbine it is, whether it is impulse or reaction. We will come to that classification very soon and you will see that there are different ways in which you can express the velocity triangle for both of these types of turbines. So, let us now try to take a look at the different types of turbines. I mentioned in the beginning that there are two different configurations of axial turbines that are possible, impulse and the reaction turbine. In an impulse turbine, the entire pressure drop takes place in the nozzle and the rotor blades would simply deflect the flow and would have a symmetrical shape. So, there is no acceleration or pressure drop taking place in the rotor in an impulse turbine. So, the rotor blades would simply deflect the flow and guide it to the next nozzle, if there is one present. In a reaction turbine, on the other hand, the pressure drop is shared by the rotor as well as the stator, and the amount of pressure drop that is shared is defined by the degree of reaction, which we will discuss in detail in the next lecture. Now, which means that the degree of reaction of an impulse turbine would be 0, because the entire pressure drop drop has already taken place in the stator, the rotor does not contribute to any pressure drop and so the degree of reaction for an impulse turbine should be 0. So, these are two different configurations of axial turbines, which are possible and what we will do is that we will take a look at their velocity triangles also, but before that we need to also understand the basic mechanism by which work is done by a turbine. Now, if you were to apply angular momentum equation for an axial turbine, what you will notice is that the power generated by a turbine is a function of well three parameters, one is of course, a mass flow rate, the other parameters are the blade speed and the tangential component of velocity, the absolute velocity. So, if you apply angular momentum at the inlet and exit of the rotor, then the power generated by the turbine is equal to mass flow rate multiplied by u 2 into C w 2, which is the product of the blade speed and the tangential velocity absolute at the inlet of the rotor minus u 3 times C w 3, which is again blade speed at rotor exit and multiplied by the tangential component of the absolute velocity at the rotor exit. Now, we would normally assume that the blade speed is does not change from at a given radial plane and therefore, U 2 can be assumed to be equal to U 3 and therefore, the work done per unit mass would now be equal to blade speed that is U multiplied by C w 2 minus C w 3 or which is also equal to the from the thermodynamics point of view, there is a stagnation pressure stagnation temperature drop taking place in a turbine, because the turbine expands the flow and work is extracted from the turbine and therefore, there has to be a stagnation temperature drop taking place in a turbine. Therefore, the enthalpy difference between the inlet and exit of the turbine would basically be equal to the work done by or work developed by this particular turbine. So, work done per unit mass is also equal to C p times T 0 1 minus T 0 3, where this is basically the enthalpy difference. 
C p T 0 1 is enthalpy at inlet of the turbine, C p T 0 3 is the enthalpy at the exit of the turbine. Let us now denote delta T 0, which basically refers to the stagnation temperature. The net change in stagnation temperature in the turbine delta T naught is equal to T 0 1 minus T 0 3, which is also equal to T 0 2 minus T 0 3, because 1 to 2 is the stator and there cannot be any change in stagnation temperature in the stator. Therefore, T 0 1 minus T 0 3 is equal to T 0 2 minus T 0 3. So, we now define what is known as the stage work ratio, which is basically delta T naught by T 0 1 and that is equal to U times C w 2 minus C w 3 divided by C p times T 0 1. So, this basically follows from these two equations here, which correspond to the work done per unit mass, one is in terms of the velocities and the other is in terms of stagnation temperatures. So, a similar analysis was also carried out when we were discussing about axial compressors and where also we had kind of equated the work that the flow does on well work done by the compressor on the flow as compared to the stagnation temperature rise taking place in a compressor as a result of the work done on the flow. So, there also we had defined the pressure rise or pressure ratio per stage in terms of the temperature rise across that particular stage and the velocity components which come from the velocity triangles. Now what you can see here is that the turbine work per stage would basically be limited by two parameters. One is the pressure ratio that is available for expansion and of course, the other aspect is the allowable the amount of blade stress and turning that is physically be possible for one to achieve in in the case of a particular turbine. So, there are two is one being the available pressure ratio and the other is the allowable blade stress and turning that one can achieve in a particular turbine configuration. So, in unlike in a compressor where we also had the issue of boundary layer behavior, because the flow was always operating in an adverse pressure gradient mode in compressors, in a turbine the pressure gradient is favorable. So, boundary layer behavior is generally something that can be controlled and there are normally not much issues related to boundary layer separation or growth of boundary layer and so on. Of course, there are certain operating conditions under which certain the stages of turbines may undergo local flow separation, but that is only for short durations. In general, in a favorable pressure gradient boundary layers generally tend to be well behaved. Now, the turbine work ratio that we had seen in the previous slide is also often defined in as a ratio between the work done per unit mass divided by the square of the blade speed. Therefore, W t by U square which is also equal to the enthalpy rise or rather enthalpy drop in the case of turbine divided by U square which is basically equal to delta C w divided by u or net change in the tangential velocity absolute divided by the blade speed. Now, this is an important parameter because based on this we can understand the differences between an impulse turbine and a reaction turbine which is what we are going to do next to take a look at what are the fundamental differences besides of course, the fact that in an impulse turbine flow is the entire pressure drop takes place only in the nozzle and in reaction turbine that is shared between the nozzle and the rotor. 
Let us take up an impulse turbine first and we will take a look at the velocity triangles for an impulse turbine and then try to find out the work ratio per stage of an impulse turbine and relate it to some parameters which we get from the velocity triangles. So, here we have a typical impulse turbine stage, a set of a row of nozzle blades followed by a row of rotor blades. And so, flow is accelerated in the nozzle and so, the velocity that reaches the rotor, the absolute component is C 2 and at an angle of alpha 2 with the axial direction and as a result of the blade speed u, the relative velocity which enters the rotor is V 2, which is at an angle of beta 2 with the axial direction. And in an impulse turbine, I mentioned that the rotor simply deflects the flow and there is no pressure drop taking place in the rotor and therefore, at the exit of the rotor we have V 3 which is at an angle of beta 3 and by virtue of the symmetry of the blades we will have beta 2 is equal to minus beta 3 and velocity in magnitude V 2 would be equal to V 3. So, which we can also see from the velocity triangle shown here, C 2 is the absolute velocity entering the rotor, V 2 is the relative velocity and the corresponding angles here alpha 2 and beta 2. Now, in the rotor we have V 3 which is equal to V 2 in magnitude, but at an angle which is different from the inlet that is beta 3 will be negative of beta 2 in the other direction. Absolute velocity leaving the blade is C 3. Now, if you look at the other components of velocities like this is the axial component of the absolute velocity C a and the corresponding tangential components of the relative velocity which are obviously equal and opposite in direction like V w 2 and V w 3. You can see that these are equal in magnitude, but of course, the directions are opposite because V 2 and V 3 are in opposite directions and C w 2 is the absolute component of the well tangential component of the absolute velocity at inlet C w 3 that at the exit of the rotor. So, this is a typical velocity triangle of an impulse turbine stage and if you if you take a closer look at the velocity triangles, I had mentioned that the angles beta 3 beta 2 are equal in magnitude, but they are different by the their orientation. So, beta 3 is equal to minus beta 2, which means that we have V w 3 is equal to minus V w 2. And the difference in the tangential component of the absolute velocity C w 2 minus C w 3 will be equal to twice of V w 2. So, let us take a look at the velocity triangle again. C w 2 is this minus C w 3 is equal to the sum of V w 2 and V w 3 and since they are equal we have that is equal to twice of V w 2, which is also equal to 2 into C w 2 minus u or this is equal to 2 u into C a by u tan alpha 2 minus 1. So, that is again coming from the velocity triangles. You can see that C a tan alpha 2 is this component minus u is equal to twice of this. So, the difference between the tangential component of the absolute velocity C w 2 and C w 3 that is delta C w for an impulse turbine will be equal to 2 u into C a by u tan alpha 2 minus 1. Therefore, the work ratio that we have defined earlier for an impulse turbine that is delta H naught by U square is equal to 2 U into C A by U tan alpha 2 minus 1. We will now take a look at what happens in the case of an of a reaction turbine and calculate the work ratio as applicable for a reaction turbine and see the is there a difference fundamentally in the work ratio of an impulse turbine and a reaction turbine. 
Now, let us take a look at a typical 50 percent reaction turbine just for simplicity. The reason why we took up a 50 percent reaction turbine is because in a 50 percent reaction turbine, the pressure drop is shared equally between the nozzle and the rotor. And therefore, velocity triangles as you can see are mirror images of one another. The velocity triangle at the inlet of the rotor is this, where this is C 2 the absolute velocity coming in from the rotor from the nozzle, V 2 is the relative velocity and this is the blade speed. And since they are mirror images at the exit of the rotor you have V 3 and C 3 and therefore, you can clearly see that C 2 will be equal to V 3 and V 2 will be equal to C 3 corresponding the angles alpha 2 will be equal to beta 3 and beta 2 will be equal to alpha 3. So, this is true only for a 50 percent reaction turbine. For any other reaction stages of course, the velocity triangles need not necessarily be symmetrical and this is also assuming that the axial velocity is does not change across the rotor and the nozzle. Now, for this kind of a reaction turbine, which is having a degree of reaction of 0.5, since the velocity triangles are mirror images or symmetrical, if you assume constant axial velocity, we have C w 3 is equal to minus C a tan alpha 2 minus u. And therefore, the turbine work ratio would basically be equal to twice into twice of C a by u tan alpha 2 minus 1. This we can compare with that of the impulse turbine, where it was 2 u multiplied by C a by u tan alpha 2 minus 1. So, you can immediately see that there is fundamental difference between the work ratio as compared to a turbine, which is impulse or in this case of course, example was for a 50 percent reaction turbine. So, there is a fundamental difference between the work ratio as applicable for an impulse turbine as compared to that of a 50 percent reaction turbine and in general for any reaction turbine as well. Now this was as far as the different types of turbine configurations were concerned and how one can analyze these turbine configurations and what are the fundamental differences between let us say an impulse turbine and a reaction turbine and how one can from the velocity triangle estimate the work ratio that or the work done by these kind of turbine stages. So what I was suggesting right at the beginning was that you can clearly see differences between the compressors and turbines by looking at the velocity triangle for these two different cases and comparing them to understand the fundamental working of compressors and turbines and what makes them two different components. What we can take up next for discussion is something we have discussed in detail for compressors as well, that is to do with a cascade. And as you have already seen, a cascade is a simplified version of rotating machine, and you could have different versions of cascade, you could have a linear cascade or an annular cascade. And basically, a cascade would have a set of blades, which are arranged, set of similar blades, which are all arranged in a certain fashion at a certain angle, which we have referred to as the stagger angle. And cascade analysis forms a very fundamental analysis of design of turbo machines, whether it is compressors or turbines. So, cascade basically consists of an array of stationary blades and constructed basically for measurement of performance parameters. And what is usually done is that we would like to eliminate any three dimensional effects, which are likely to come up in a cascade. 
And one of the sources of three dimensionality is the presence of boundary layer. So, one would like to remove boundary layer from the end walls of a cascade. And so, that is a standard practice that one would have porous and balls through which boundary layer fluid can be removed to ensure two dimensionality of the flow entering into a cascade. Now, it is also a standard assumption that radial variations in velocity field can be kind of eliminated or ignored and cascade analysis is primarily meant to give us some idea about the amount of blade loading that a particular configuration can give us as well as the losses in total pressure that one can measure from a cascade analysis. So, and in turbine cascades testing also involves wind tunnels which are very similar to what we had discussed for compressors. I had shown you cascade wind tunnels when we were discussing about cascades in the context of compressors. In turbine cascades are also tested in similar wind tunnels and just that in a case of turbine, since they are operating in an accelerating flow, there is a requirement of a certain pressure drop across a turbine. So therefore, the wind tunnel is required to generate sufficient pressure, which can be expanded through a turbine cascade. Now, turbine blades as you are probably aware would are likely to have much higher camber than compressor cascades or compressor blades and turbine cascades are set at a negative stagger unlike in compressor blades, something I will explain when we take up a cascade schematic in detail. Now, cascade analysis will basically give us as I mentioned two parameters besides the set of other parameters like boundary layer thickness and losses etcetera. The most fundamental parameter we would like to look at from the cascade analysis is this surface static pressure distribution or C p distribution, which is related to the loading of the blade and the second aspect of this is the total pressure loss across the cascade, which is yet another parameter that one would like to infer from the cascade analysis. Now, let us take a look at a typical cascade, turbine cascade nomenclature. I think I mentioned at the beginning that all the terms that we have used for compressors will, it is the same nomenclature that we apply for a turbine as well, just that the way the blades are set or the blade geometry, they are quite different between compressors and turbines. So, if you look at a typical compressor cascade, these are the blades. You can immediately see that these blades have much higher turning or camber than compressor blades. So, it is a set of these blades, which are arranged either linearly or in an annular fashion, which constitute a cascade. So, these blades are set apart by a certain distance, which is as you can see denoted by pitch or spacing, and these blades are set at a certain angle, which is called the blade setting or the stagger angle. So, you can see this lambda, which you see here refers to the blade setting or stagger angle. The blades have a certain camber, which is basically the angle subtended between the tangent to the camber line at the leading edge and that at the tail trailing edge. So, the difference between that gives us the blade camber. Now, the flow enters the cascade at a certain angle. You can see that inlet blade angle is given here as beta 1 and the blade outlet angle is beta 2. Now, so if there is a difference between the blade angle and the flow angle at the inlet that is basically the incidence which is denoted by I here. 
So, this is the incidence angle. Similarly, a difference between the blade outlet angle and the outflow angle is the deviation which is denoted by delta. So, at the exit you may have a flow deviation at the inlet, one may have an incidence. And if you draw a normal or at normal to the tangent at the trailing edge and take it to the next adjacent blade, the suction surface of the adjacent blade. So, this distance that you see here is basically referred to as a throat or opening at the turbine exit and that is here denoted by a symbol O. The blade chord as you already know is denoted by C and then the blades also would have a certain finite thickness at the trailing edge. So, that is denoted here by the trailing edge thickness. So, the blades practically will have a certain amount of finite thickness and that is what is denoted here as the thickness at the trailing edge. So, these are the fundamental nomenclatures nomenclature that is used in turbine, very similar aspect was also used in compressor, where we had defined all these different parameters like incidence and deflection deviation and blade angle, the camber, pitch, stagger, all of them were defined. The difference is of course, the way the blades are set. This is set at a negative stagger as you can see the compressor cascade. If you go back, you will see that the way the blades are set is opposite to what you see in the case of a turbine. That is basically to ensure that the flow passage gives you the required amount of flow turning and also the flow acceleration in the case of turbine cascades and in compressor cascade, the setting is to ensure that you get deceleration in a compressor. So, having understood the fundamental nomenclature of a turbine cascade, we would now take a closer look at the different aspects of flow through a cascade and I would be deriving well not really a detailed derivation, but I would just give you some idea about how one can calculate the lift developed by a certain cascade turbine cascade in two different cases. One is if we do not assume any losses or if it is an inviscid analysis and followed by a viscous analysis one of course, would also get a drag in the case of a viscous analysis how one can calculate the lift and of course, that is basically related to the loading of the blades eventually. So, the basic idea of cascade analysis is that just like in the case of an airfoil, because cascade is in some sense an airfoil analysis, we can determine the lift and drag forces acting on the blades. And this analysis as I mentioned can be carried out using both these assumptions of potential flow or inviscid analysis or by considering viscous effects in a rather simplistic manner. So, we will assume that the mean velocity, which we are going to denote as v subscript m, makes an angle of alpha subscript m with the axial direction. What we will do is to determine the circulation developed on the blades and subsequently the lift force. In the inviscid analysis obviously, there is no drag and there is only a lift force, which lift is the only force acting on the blade in the case of an inviscid analysis. When you take up a viscous analysis, there are two components of the force and the resultant force, the lift and the drag. So, this is the geometry we are considering for an inviscid flow through a turbine cascade. If you take a look at two different streamlines, let us say this is one stream line and another stream line, which is bounding one particular blade that is shown here. 
These are the two different streamlines. What we are going to do is to find the circulation produced over this particular blade, which is treated as an airfoil here, and then relate that to the lift developed on this particular blade. So, the flow entering the cascade is V 1 and the flow exiting the blade is V 2, and of course, we will assume a mean velocity V m, which makes an angle alpha m with the axial direction. So, if this is the case, this is how you can look at the circulation: this is the contour along which we are calculating the circulation, and therefore, this is the lift acting on this particular blade. Since it is a turbine blade, you know that this is basically the direction in which the lift is going to act. So, the mean velocity shown here by the vector V m acts in this direction, this is the inflow velocity V 1, and this is the exit velocity V 2. So, the circulation, denoted by capital lambda here, is equal to S multiplied by the difference in the tangential velocities, V w 2 minus V w 1, and the lift is related to the circulation: it is the product of the density, the mean velocity and the circulation. Therefore, when no other effects such as viscous effects are considered, the lift acting here would simply be the product of rho times V m and the circulation, which is S into V w 2 minus V w 1. This is expressed in a non-dimensional form, which we refer to as the lift coefficient. So, C L here is the lift divided by half rho V m squared into C, and this is equal to rho into V m into S into V w 2 minus V w 1, divided by half rho V m squared into C. This can be related to the flow angles across the cascade, and so we can simplify this lift coefficient as 2 into S by C into tan alpha 2 minus tan alpha 1, multiplied by cos alpha m. So, this is basically the lift coefficient on a turbine blade assuming that the flow is inviscid. Now, what happens if there are viscous effects? The primary effect of viscous flow through a turbine cascade is that viscous effects will manifest themselves in the form of total pressure losses. Therefore, the wake from the blade trailing edge will lead to a non-uniform velocity leaving the blades. In the previous analysis, we were assuming uniform velocity entering the blades and uniform velocity leaving the blades, because it is a potential flow. So, here in the case of the viscous analysis, in addition to lift one would also have a drag, which will also contribute to the lift in some way or the other. So, the effective force acting on the blade will be a resultant of both the lift as well as the drag acting on the blade. We now define what is known as a total pressure loss coefficient; we had defined a similar parameter for compressors as well. This is denoted by omega bar, because there is a total pressure loss taking place across the blades as a result of the viscous effects. So, omega bar is equal to P 0 1 minus P 0 2 divided by half rho V 2 squared. This is the loss in total pressure across the turbine cascade. So, the schematic I had shown earlier now gets modified, because you have a set of uniform streamlines entering the turbine cascade, but as they leave the cascade, you can see that they have become non-uniform, basically at the trailing edge where there is a wake. So, what is shown here schematically are the wakes of all these blades that are present here. 
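Collecting the inviscid result stated above into standard notation (Gamma is used here for the circulation, s for the pitch and c for the chord; the last step uses V_{w2} - V_{w1} = V_x(\tan\alpha_2 - \tan\alpha_1) with V_x = V_m \cos\alpha_m):

\[ \Gamma = s\,(V_{w2} - V_{w1}), \qquad L = \rho\, V_m\, \Gamma, \]
\[ C_L = \frac{L}{\tfrac{1}{2}\rho V_m^2\, c} = 2\,\frac{s}{c}\,(\tan\alpha_2 - \tan\alpha_1)\cos\alpha_m . \]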
So, there is a difference in the forces acting on the blade as a result of this non-uniformity in the velocity at the exit of the turbine cascade. In this case, we can relate the drag to the total pressure losses: the drag is omega bar into S into cos alpha m. Therefore, the effective lift will now be the sum of the inviscid lift and the component of the drag in the lift direction. Lift, we know, is the product of the density, the mean velocity and the circulation, so the effective lift is rho V m into the circulation plus the drag times tan alpha m. Therefore, the lift coefficient in this case gets modified as twice S by C into tan alpha 2 minus tan alpha 1 into cos alpha m, plus the drag component C D times tan alpha m. So, this is the manner in which we can calculate the lift coefficient for both these cases: one is the case without viscous effects, and the second is if we consider viscous effects. The basic idea of calculating these coefficients was also to calculate the blade efficiency. So, based on the calculation of the lift and drag coefficients, we can now calculate the blade efficiency, which is basically the ratio of the ideal static pressure drop required to obtain a certain change in kinetic energy to the actual static pressure drop which will produce the same change in kinetic energy. I have, of course, skipped the derivation of the blade efficiency, but it can be related to the lift and drag coefficients: the blade efficiency is 1 minus C D by C L tan alpha m, divided by 1 plus C D by C L cot alpha m. And if you were to neglect the drag term in the lift definition, because C D, the drag term, is usually much smaller in comparison to the lift, the blade efficiency is simply 1 divided by 1 plus 2 C D by C L sin twice alpha m. So, the basic idea of calculating the lift and drag coefficients was also to calculate the blade efficiency, which is basically a function of C D, C L and the mean angle alpha m. So, let me now quickly recap our discussion in today's class. We had taken up three distinct topics for discussion. One was the different types of axial turbine configurations, the impulse and the reaction turbine stages, and we had a look at the velocity triangles and how you can calculate the work ratio for impulse and reaction turbine stages. We also went through the work and stage dynamics. We have looked at these different configurations of axial turbines and how we can go about determining the work ratio for these two configurations. And then, we had some discussion on turbine cascades and the calculation of lift and drag for a typical turbine configuration, and how we can use that information to calculate the blade efficiency from simple turbine cascade testing. That is, simple cascade testing can actually give us some idea about the blade efficiency that this kind of blade configuration can give us. So, that brings us to the end of this lecture. We will continue our discussion on axial turbines in the next lecture as well, where we will primarily be talking about the performance parameters, the degree of reaction, losses as well as the efficiency of axial turbines. 
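Before leaving the cascade analysis, the viscous-flow results stated above can be restated compactly. The spoken description compresses the drag relation, so it is written here in terms of the dimensional total pressure loss, with omega bar as its non-dimensional form; the efficiency expressions are the ones quoted in the lecture:

\[ \bar{\omega} = \frac{p_{01} - p_{02}}{\tfrac{1}{2}\rho V_2^2}, \qquad D = (p_{01} - p_{02})\, s \cos\alpha_m, \]
\[ C_L = 2\,\frac{s}{c}\,(\tan\alpha_2 - \tan\alpha_1)\cos\alpha_m + C_D \tan\alpha_m, \]
\[ \eta_b = \frac{1 - (C_D/C_L)\tan\alpha_m}{1 + (C_D/C_L)\cot\alpha_m} \;\approx\; \frac{1}{1 + 2\,C_D/(C_L \sin 2\alpha_m)} \quad (C_D \ll C_L). \]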
And, where we will also take up detailed discussion on what are the different losses in a two-dimensional sense and how we can define efficiency and you will see that there are different ways of defining efficiency for a turbine. So, we will take up some of these topics for discussion in the next class." }, "/work/yuxiang1234/prompting-whisper/audios-1/35mdQVR28MU.mp3": { "gt": "Welcome to lecture 6 of a high performance computing. We are in the process of looking at the MIPS 1 Instruction set in some detail. In the previous lecture, we saw that the MIPS 1 Instruction set is the 32 bit; in other words, 32 bit worth size. Load store in other words, the only instructions that take memory operands are the load and store instructions, and it is a RISC Instruction set. We saw that the 32 general purpose registers R0to R31 of which, R0 is always contains the value 0, and R31 is implicitly used by some instructions. We saw that there are 2 other additional registers called HI and LO. We saw the addressing modes which are used by the instructions, and also looked in detailed at two of the categories of instructions - the data transfer instructions, in other words, loads, stores and moves to and from the registers HI and LO as well as the arithmetic and logical instructions. We are in the process of actually looking at the table of arithmetic and logical instructions. Let me remind you that there are add and subtract instructions which can also take immediate operands, such as, the second example, and the logical operations also have the possibility of immediate operands. The immediate is are all signed 16 bit quantities which may have to be sign extended if they have to be operated on with 32 bit operand. All that means to be understood in this table, other multiply and divide instructions which have the audit that they are the only instructions in the MIPS 1 Instruction set, which do not seem to have a destination operand specified which we need to decipher. So, let me just go through an example. Here is an example of multiply instruction, multiply R1 R2. So, we suspect that R1 and R2 are the source operands of this instruction, and that what the instruction is supposed to do is to take the 32 bit value out of R1 multiplied by the 32 bit value out of R2, and the only question is what is it supposed to do with result? There are first thing we need to understand is that the result is conceivably larger than 32 bits. There in result could in fact be as large as 64 bits, and I guess all of you, many of you may have an understanding of this from the decimal equivalent. If I have to, let us suppose I am talking about decimal, if I have to two digit numbers and I multiply them together, you know that the result is in this case three digits, but in general could be as high as four digits. So, when I multiply to two digit numbers, I could get a number which is as big as 4 digits. When I multiply to two digit numbers, the result could be less than four digits. Which is why in this comment appear, I mentioned that when I multiply to 32 bit values, the result could be as large as 64 bits. This is the problem for the MIPS Instruction set, because you will remember that the size of all the general purpose registers is only 32 bits, and that is why when I do a multiplication of 2 register values, there is no guarantee that the result can fit into one of the general purpose registers, and this is why the HI and LO come into the picture. 
So, what happens in the case of the MIPS 1 Instruction set is that, the result of the multiplication. In other words, the product is put into the registers HI and LO, and as the name suggest, the least significant bits of the product going to the register LO and the most significant 32 bits of the product go into the register HI, and you know that both HI and LO from the previous lecture, what I told you? Both HI and LO are 32 bit registers, and therefore, the full 32 bits of the product can be stored and made available to the program. And typically you would expect that the program will have to check to see whether high is equal to 0, and if high is equal to 0, then that the full product is contained within the register LO and proceed accordingly. Now, what about the divide instruction? The divide instruction like the add instruction has only 2 operand, for example, I might divide R1 by R2, but in the case of divide, you know that the result a quotient, but there is also an other result which is the reminder. How big could the quotient be? We know the, there is no problem with the size of the quotient; there is no problem that the size of the reminder; both of them can only be as large as 32 bits, only questions where should those 2 go? So, this, if I had a divide instruction in the MIPS 1 Instruction set, problem would then be where do I, how do I specify 2 destination operands; in other words, 1 destination operand for the quotient; the other destination operand for the remainder in a single instruction. And therefore, once again to bypass this problem, they use the HI and LO registers. The quotient goes into LO and the reminder goes into HI, and this is basically all we need to know about the multiplying divide instructions. So, in effect to do a multiplication or division, one can use the single instruction but they will be follow up instructions to transfer the result, whether it be the quotient or the 32 bits of the product into one of the general purpose registers for the subsequent computation by the program. Now, with this, we have actually completed our quick look at the MIPS 1 arithmetic and logical instructions, and we will now move on to the control transfer instructions. In some sense, this is the largest and the most important category of instructions, because these are the instructions which are going to be used to create the control flow of the c program and is the part of programming which is probably the most challenging. Now, as your expect, in this table, had we want you that I am going to have to talk separately about the conditional branch instructions or branch instructions, the unconditional jump instructions. A family of instructions which are going to be used to implement function calls, which are call the jump and link instructions and some other instructions. I have actually introduce the system call instruction at the bottom and I am not going to say anything about it today. But we will come back to this instruction after 7 or 8 lectures well into the course. Therefore, please do remember that I talked about the system call instruction, and with that, I will not talk about any more today. Now, in terms of a terminology used in this table, one piece of terminology that is used is the letter R is used in this table to stand for register. In some of the notation, I have used target sub 26, and target sub 26 is referring to an absolute operand of 26 bit size. 
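Before moving on to the control transfer instructions, here is a minimal C sketch of the MULT and DIV behaviour just described: the 64-bit product is split between HI and LO, and for division the quotient goes to LO and the remainder to HI. HI and LO are modelled as plain variables here, purely for illustration, not as a real simulator.

#include <stdint.h>
#include <stdio.h>

static uint32_t HI, LO;                      /* stand-ins for the HI and LO registers */

void mult(int32_t r1, int32_t r2)            /* models: mult R1, R2 */
{
    int64_t product = (int64_t)r1 * (int64_t)r2;    /* the product can need up to 64 bits */
    LO = (uint32_t)product;                          /* least significant 32 bits go to LO */
    HI = (uint32_t)((uint64_t)product >> 32);        /* most significant 32 bits go to HI  */
}

void divide(int32_t r1, int32_t r2)          /* models: div R1, R2 */
{
    LO = (uint32_t)(r1 / r2);                /* quotient  goes to LO */
    HI = (uint32_t)(r1 % r2);                /* remainder goes to HI */
}

int main(void)
{
    mult(100000, 100000);                    /* 10^10 does not fit in 32 bits */
    printf("HI = %u, LO = %u\n", (unsigned)HI, (unsigned)LO);
    if (HI == 0)                             /* the check the lecturer mentions */
        printf("full product is contained in LO\n");

    divide(17, 5);
    printf("quotient (LO) = %u, remainder (HI) = %u\n", (unsigned)LO, (unsigned)HI);
    return 0;
}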
Some of the instructions have a Z in the mnemonic and Z stands for zero, and the any other notation which I have used here is that in this particular meanings column, I have used a two pipe operator by which you can understand that I mean the operation, catenate or concatenate whichever you are more used to. So, with this quick overview of the terminology, we can try to understand these instructions one by one. So, let me just remind you the conditional branch instructions or branch instructions, and B stands for branch in all of these instructions will be used to transfer control of the program depending on whether a particular condition is true or not. I suppose to the jump or unconditional branch instructions which transfer control unconditionally. So, if you look at this particular example, BLTZ R2 minus 16 and look at the explanation, it looks quite mysterious. So, just still I need to explain this little bit more detail. So, let me separately talk about the conditional branch instructions. So, we now have the same example branch if less than 0 and the meaning of it, but let me just go through these mnemonics one by one. So, we have BEQ, we need to be able to read this in a slightly friendly of fashion in BEQ, B and E, BGEZ, etcetera. We know that Z stands for zero and we expect that EQ stands for equal that the condition which we which is being tested is a test for a quality. Using that is a hint, we would guess any e stands for not equal, GE stands for greater than equal, LE for less than or equal, LT for less than and GT for greater than; which means that the kinds of conditions that are being tested by these instructions are equal which you understand in c by the equal operator. NE which is not equal greater than or equal to 0. Remember that the Z at the end of BGEZ stands for 0, less than or equal to 0, less than 0 and greater than 0. So, these mnemonic suggest that these are the conditions which are being tested whether two things are equal, whether they are not equal, whether something is greater than are equal to 0, less than or equal to 0, less than 0 or greater than 0. From this example, we understand that for the comparison with 0 instructions, the value inside the register is being compared with 0. So, this instruction basically means branch if less than 0 R2, which you could read us branch if R2 is less than 0 2 minus 16. Now, in the lead up to this, in the in the previous lecture in talking about the different addressing modes used by the MIPS 1 Instruction set, I had indicated that the branch instructions use a PC relative addressing mode. And in the case of a branch instruction, the first operand is very clearly being specified in the register director addressing mode, and therefore, the PC relative addressing mode must be referring to the second operand, this minus 16. Which is why we are not too surprised to see in the meaning column that the meaning of the minus 16 is that the way that the program counter is change. In other words, the way that the control flow of the program is modified is by taking the whole value of the program counter and subtracting 16 from it. At this point, we are not too sure why this plus four is there. That will become clearer when we talk about how the hardware to implement this instruction set could be implemented. But for the movement, we must understand that this is the meaning of the instruction. 
Then if I execute an instruction branch if less than 0 R2 minus 16, the meaning is that if the current value of the register R2 is less than 0, then the program counter will be modified. In other words, there will be a change of control to the program counter of the branch instruction plus 4 minus 16, and the minus 16 is because of the fact that this is PC relative addressing. So, something similar is going to happen for all of the instructions which compare with 0, but what about the instructions that do not compare with 0, in other words, the branch if equal and the branch if not equal? Once again, if there is a condition which is being checked and the a check of equality is being done, then two things must be being checked for a quality. So, the question is what is a BEQ instruction look like, and the answer is there. In the case of the BEQ and the BNE instruction, there are actually 2 operands. In fact, an example would be BEQ R1 R2 minus 16, and this instruction basically reads branch to minus 16 if R1 is equal to R2, a branch if equal R1 R2 minus 16, and similarly, for the branch if not equal. So, this family of instructions is adequate for many of the conditions that you will encounter, but if there are conditions which do not directly fall under the set of six comparisons, then one must find the way of achieving those conditions using these six instructions which is the programming challenge or the challenge to the compiler. So, the idea of the BEQ R1 R2 minus 16 is, that the branch will be taken if the contains of R1 are equal to the contains of R2, if the contains of these 2 general purpose registers are equal. Now, in terms of terminology, you should note that take the branch is the same as saying that the program counter will be modified to something that it is not just what it would have been if the branch are not been taken, and in terms of terminology, one often talks about the target address of a branch. The target address is the branch; I mean the address to which control will be transferred if the branch is in fact taken as indicated by the PC relative addressing mode. So, from now I will refer to target address of a branch or even the target address of a jump, because even for a jump, there is an address to which control will be transferred, and in general, we will refer to this as the target address, and so, this particular example the branch if less than 0 example, the target addresses PC plus 4 minus 16, and as I said we do need to understand what the plus 4 is. That will happen a little bit later. Now, in general, for the examples that we write, it will be somewhat inconvenient to write minus 16 and then, to actually count down in the program or up in the program to see which is the instruction, which is other particular address. So, rather than planning on doing that for writing examples in this course, I will rather go for the alternative of using a c like notation for the specification of the PC relative addressing mode, and I will explicitly just label instructions of the program, and instead of putting the PC relative address of the target into the instruction, I will just put the label corresponding to the target in the instruction. This would make a lot easier for us to read and understand code sequences in the rest of this course. So, I am postponing only one thing in this discussion and that is to explain to you in more detail with this plus 4 is coming from; otherwise, we have understood how these conditional branch is operate. 
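A small sketch of the target-address arithmetic just described for BLTZ R2, -16 may help: the new program counter is the address of the branch, plus 4, plus the signed displacement carried by the instruction (the reason for the extra 4 is deferred to the hardware discussion, as in the lecture). The address used in main() is a made-up example.

#include <stdint.h>
#include <stdio.h>

uint32_t branch_target(uint32_t pc_of_branch, int32_t displacement)
{
    return pc_of_branch + 4 + (uint32_t)displacement;   /* PC + 4 + displacement */
}

int main(void)
{
    /* hypothetical branch instruction at 0x00400020 with displacement -16 */
    printf("target = 0x%08x\n", branch_target(0x00400020u, -16));
    return 0;
}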
So, let us move on to the jump instructions. So, in general, j stands for jump, and I had indicated that R stands for register. In general, jump is unconditional control transfer. So, we have two examples - jump to target and the target is specified in the instruction as a twenty six bit absolute. Remember that in our notation slide, I talked about target 26 is being a absolute, something which is specified absolute addressing mode. In other words, it is inside the instruction. The second example is an example of JR, I am sorry, it should be in R; here, JR R 5. So, what is these instructions do? The second instruction very clearly changes the value of the program counter, so, the program counter now contains whatever value is present in R5, but the first example is little bit more complicated as can be seen from the meaning. So, how is the program counter manipulated in the case of the first example? Now, the problem that we are seeing over here is as any jump instruction is an unconditional control transfer. In both these examples, the objective is to transfer control to the instruction at the target address, and the target address is specified using the absolute addressing mode. The target address itself has to be included inside the jump instruction. Since the target address is the size 32 bits. Since the target address is going to be the address of an instruction into conceivably has to be 32 bits in size, but obviously I cannot include a 32 bit target address inside a 32 bit instruction. We saw this problem in the previous lecture, and therefore, the MIPS 1 solution is that you allow to specify a 26 bit target address, and the hardware uses this 26 bit target address to construct the 32 bit address by adding two 0's as least significance bits of the final target address, and the remaining bits of the target address I just taken from the current value of the program counter. In other words, the value of the program counter associated with the jump instruction itself. So, this is the way that they overcome the problem of not being able to specify a 32 bit target address, and it may be a little bit restrictive in the range of addresses they can be used as targets for the jump instruction, but if that is the case, then one can always use the jump register instruction, in which case the 32 bit target address can be placed into the register so much for the jump instruction, and the jump register instruction. With this, we will move on to the next category of MIPS 1 control transfer instructions, the third line in our table. These are the jump and link instructions of which once again there are two, and now, we can read this as jump and link and this as jump and link register. So, what are these instructions to do? First of all let me just give you an example of each. Jump and link register we saw as in the case of the jump register instruction from the previous table would have a single operand which is a register operand. Whereas, the jump and link instruction has single operand which is a specified in absolute addressing mode and the meaning example over here applies to the first example not to the second example. Now, the meaning that we see over here is little bit interesting. This is the first instance that we are seeing of a MIPS 1 Instruction, in which, the meaning of one Instruction involved two operations. 
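Since the 26-bit field of the jump instruction has just been explained, here is a short C sketch, with made-up addresses, of how the full 32-bit target is assembled: the 26-bit value gets two zero bits appended at the bottom, and the remaining upper bits are taken from the program counter of the jump itself, as described above.

#include <stdint.h>
#include <stdio.h>

uint32_t jump_target(uint32_t pc_of_jump, uint32_t target26)
{
    uint32_t low28 = (target26 & 0x03FFFFFFu) << 2;   /* 26 bits plus two appended zeros */
    return (pc_of_jump & 0xF0000000u) | low28;        /* upper bits come from the PC      */
}

int main(void)
{
    /* hypothetical jump at 0x00400100 with a 26-bit target field of 0x0010004 */
    printf("target = 0x%08x\n", jump_target(0x00400100u, 0x0010004u));
    return 0;
}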
I could know all of the situations that we had the meaning was a single line or if you there was one situation where the characters would not fit on to a single line, but here there are two separate operations which are happening as part of the jump and link instruction; which means that this is the very special kind of an instruction, it does more than one simple thing, and we look at this in more detail. We see that it does two things - one of them is to manipulate R31. It puts PC plus 8 into R31, even though R31 is not an explicit operand of this instruction. And then, it modifies the program counter to now contains the value of R2. This is a somewhat curious sequence of instructions, and so, it seems to unconditionally transfer control to that, to that instruction at the target address. That is what the second step is doing. While at the same time, remembering PC plus 8 in register R31, which is what the first instruction is doing, which is what the first operation in the sequence was doing. So, this is the first example that was seeing in the MIPS 1 RISC Instruction set of an instruction which is doing something which is not a really primitive operation, because in order to describe this, I have to use two primitive operations, and therefore, this must be really important; otherwise, I could not have included it, is the deviation from the RISC. In some sense, it is a deviation from the RISC design principle of trying to have every each instruction doing only one single operation. So, as we will see, I believe in the next lecture, possibly the lecture after that, this instruction is critical for the implementation of function calls and that is why it had to be included, and how it is useful in the implementation of function calls, we will see shortly. Now, before actually moving to looking our programs, in general, I want you to just try to relate control transfer as you know it from your c programming experience to control transfer as we must view it now in the MIPS 1, well. Now, I was here, in this table, I have three examples of control transfer from the c, from your c world, there is a go to statement; there is an if then else construct, and there is a repeat loop. The repeat loop is one example of a loop. I could just as well I have included a while loop or a for loop, and after having gone through the repeat loop, you will see the, yourself will be able to fill up the table with additional entries for a while loop or for a for loop without any great difficulty. So, the purpose of this table is to give us a good understanding of what we can expect to see in a MIPS 1 program in place of a go to. In our original c program, if there was a go to, what can we expect to see in the MIPS 1 program? If you can, if in our original program, there was an if than else what can we expect to see in the MIPS 1 program and so on. So, let us start by thinking about the go to. So, go to in a c program is an example of a unconditional control transfer. Control is to be transfer to the statement which has that particular label unconditionally, no need to check any condition, and so, we suspect them in the MIPS world. The same effect is going to be achieved using an unconditional control transfer instruction. Therefore, either a jump instruction or a jump registers instruction. Since in our current terminology, I am using labels in my MIPS 1 programs, but we will actually see is that go to label will map into a jump instruction to the same label. 
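Returning for a moment to the jump-and-link register example above, its two-step meaning (store PC + 8 in R31, then load the PC from R2) can be written as a toy model. This is only an illustration of the meaning given in the lecture, not a simulator, and the addresses in main() are invented.

#include <stdint.h>
#include <stdio.h>

struct cpu { uint32_t pc; uint32_t r[32]; };

void jump_and_link_register(struct cpu *c)   /* models a jump-and-link through R2 */
{
    c->r[31] = c->pc + 8;     /* first operation: remember PC + 8 in R31 */
    c->pc    = c->r[2];       /* second operation: transfer control to the target in R2 */
}

int main(void)
{
    struct cpu c = {0};
    c.pc   = 0x00400200u;     /* hypothetical address of the jump-and-link instruction */
    c.r[2] = 0x00400800u;     /* hypothetical target, e.g. the entry point of a function */
    jump_and_link_register(&c);
    printf("R31 = 0x%08x, PC = 0x%08x\n", c.r[31], c.pc);
    return 0;
}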
Now, there is a remote possibility that you will recall, you will recall that under this notation, label will be specified using the target 26 absolute addressing mode inside the jump instruction. Therefore, as I had suggested in the slide on jump instructions, there is a possibility that if the target address of this jump. In other words, there are label, the statement which is label by the go to is very far away from the jump instruction. Then in may not be able to specify it appropriately using a 26 bit target address, and you could work out the kind of constraints which this places on your the distance between the go to statement in your c program and the label, and try to figure out if it is really that much of a constraint after all. Remember that this is 26 bits which is the actually a fair distance in terms of instructions even if each instruction is 4 bytes in size. So, the go to a label will simply map into a jump to a label in almost all cases that you will encounter. Next, let us consider the, if X greater than 0, the if then else construct. Now, I am using a generic kind of example here. In general, they will be some condition, I am using the example of the condition is X greater than 0, X is a variable. In the then part, they could be any number of statements; so, I am just indicating it by then part. In general, if there is no other one statement, you would have to enclose that by braces, and then, there is the else part. This could be once again any number of c statements. So, very clearly in the MIPS 1 equivalent, I will specify to this example, I will have to start by loading the variable X into a register, because in the MIPS 1 branch instructions, actually I have to use a branch instruction, a conditional branch instruction to check this condition. The MIPS 1 branch instructions all take their operands out of registers. Therefore, I will have to start by loading the value of this variable X into a register, then I will have to compare it is value to zero, and that I can do using one of those six branch instructions, and appropriately, the target of the branch will be a label which will relate to the then part of the else part. So, the way I am suggesting to work it out is as follows. I will start by getting the value of the variable X which is related to this condition into a register, let us say R1. Then in this particular case, I want to transfer control to the then part if X is greater than 0. I have a way to check if X is greater than 0 in the MIPS 1 instruction set using branch if greater than 0. So, if R1 is greater than 0, then I transfer control to then part, and I have a label called then part later in my program. What if R1 is not greater than 0? Then I want to transfer control to else part. Therefore, in this example, I have put my else part as a label immediately following the conditional branch instruction, and therefore, this sequence will achieve what was required by the if then else. Now, we need not be too concerned that then part appear before the else part in the c program, and in the MIPS program, the else part appears before the then part. That is just inconvenience if one is, that is not during that the inconvenience, because alternately one does not really read a typical c programmer, does not even read the machine language code. Finally we will go to the repeat loop, and I am again using a somewhat generic example. 
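To collect the if-then-else mapping just described in one place before moving on to the repeat loop, here it is as a sketch, with the MIPS 1 sequence shown as comments above a C example. The register numbers and labels follow the lecture's illustrative choices; the final unconditional jump over the then part is an added detail, not spelled out in the lecture.

/*      lw    R1, x          # get the value of the variable x into a register
 *      bgtz  R1, thenpart   # if x > 0, transfer control to the then part
 *  elsepart:                # otherwise fall through to the else part
 *      ...                  # else part
 *      j     done           # skip over the then part (added for completeness)
 *  thenpart:
 *      ...                  # then part
 *  done:
 */
int example(int x)
{
    if (x > 0) {
        return 1;            /* then part */
    } else {
        return 0;            /* else part */
    }
}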
Repeat any number of statements which are inside the repeat loop until this condition becomes true; in other words, until x comes not equal to 0. Once again X is the name of a location in memory. It is the name of a variable. Therefore, I will have to start by loading X into a register. Subsequently, I can use a conditional transfer instruction which will transfer control to the beginning of the loop body. Therefore, the way this could be mapped is I have an instruction to load X into a register. Then I have the loop body and I have associated with the loop body a label called loop head. So that at the end of the loop body I check if r one is equal to 0, R1 contains the variable X. Therefore, if R1 is equal to 0, that means that this condition is not true, and therefore, I have to go back to loop head, and therefore, this achieves the effect of this conditional transfer of this loop. So, just note that even though the condition was not equal to 0, I ended up having to use a condition of equal to 0 because the not equal to 0 was the exit condition for the loop, and therefore, the continue condition for the loop was equal to 0. And note here once again that I am comparing R1 with 0 by using register R0. I have to do this because I am using the BEQ instruction which has two operands. Now, just to make sure that notation here is clear. Inside my c statement, there was a loop body which contained any number of c statements. Now, those c statements would get translated by the compiler into some collection of machine MIPS 1 instructions, and therefore, the number of instructions in the loop body will depend on the number of statements in the loop body over here and would depend how the compilation is done. But with this, we understand that essentially the importance c control transfer constructs that we are aware of can be achieved using the conditional and unconditional transfer instructions of the MIPS 1 instruction set. So, with this, we have actually seen the data transfer instructions, the arithmetic and logical instructions in the control transfer instructions, and we are ready to actually look at real examples of MIPS 1 programs. Now, as it happens if I had made available to you a MIPS 1 instruction set architecture manual, then you would have read about the details of each instruction separately. Each of these instructions would have been mentioned on a separate page and on the page they would have given you all the information that I have gone through over here but some additional information as well, and some of the additional information that would have been included in the in the instruction set manual would have been some relevance, and therefore, as important aside, I am going to mention two kinds of important comments that one might see in an instructions set architecture manual relating to some of the instructions that we have come across. So, I would label this slide as interesting notes from the instruction set architecture manual. Now, for some RISC Instruction sets, you find comments like this associated with load instructions in the instruction set architecture manual. The loaded value might not be available in the destination register for use by the instruction immediately following the load. 
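Similarly, before turning to the warnings from the instruction set manual, the repeat-until mapping described above can be collected into a short sketch. The MIPS 1 sequence is shown as comments; next_value() is a hypothetical stand-in for whatever the loop body does to x.

/*      lw    R1, x              # load the variable x into register R1
 *  loophead:
 *      ...                      # loop body
 *      beq   R1, R0, loophead   # continue while R1 == 0 (x != 0 is the exit condition)
 */
static int next_value(void) { return 1; }    /* hypothetical stand-in */

void example(void)
{
    int x = 0;
    do {
        x = next_value();        /* loop body */
    } while (x == 0);            /* i.e. repeat ... until (x != 0) */
}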
This is obviously a comment which was included on the page of the load instruction as a warning to the programmer, because by default, the programmer might assume that if there is a load instruction that the value which is to be loaded into the destination register will become available as soon as the instruction has been executed, and therefore, the instruction immediately following the load instruction can definitely use the value in the destination register. This is an explicit warning that, that is not the case, and it is present in apparently in the instruction set architecture manual of many MIPS 1 processes, which is why I have included it here. If at this warning is so important that it is given a name, it is given the name of load delay slot - indicating that the programmer must be aware that load instructions have the delay associated with them. They do not cross the load destination register to become updated as soon as you would imagine. There is also this element of uncertainty. If you look at this description, it says that the loaded value might not be available. So, just think that in some implementations of the MIPS 1 Instruction set, the loaded value might be available, or for some programs when they execute for some reason, the MIPS the loaded value might be available, but in general, the programmer cannot assume that the loaded value will be available. So, this is one kind of warning which you might see, and other kind of warning which you might see and in instruction set architecture manual for a RISC like architecture like the MIPS 1, relates to control transfer instructions, and might say something like this - the transfer of control takes place only following the instruction immediately after the control transfer instruction. Now, typically you would expect that the transfer of control will take place as part of executing the control transfer instruction, and that therefore, the instruction following, the, could control transfer instruction is clearly covered. But here, we have explicit mention that the transfer of control takes place only after the instruction immediately after the control transfer instruction has been executed. Therefore, definitely a very important warning to a programmer particularly if he is program he or she is programming in this language, in the machine language, and once again, this is such an important warning that is available in on the page of the instruction set architecture manual, and it is given a name - the branch delay slot. I warning that control transfers are also delayed, they do not actually happen us fast as you would have thought. So, from the programmers perspective, what is this mean? Let us just try to understand what I as a programmer should do if I see these kinds of warning in a instruction set architecture manual. Let us start for with that warning about load instructions. So, the warning was something to do with when the destination will, when the loaded value will be available in the destination register. So, let us suppose that I have would written a program and there is a load word instruction. Remember, this is a load word instruction, you saw this not too long back. A load instruction in general causes a value to be copied from main memory, from avoidable in main memory into a register. In general, this is an instruction which will do such a load of a word size or 32 bit size quantity. 
The destination is to a register R1, and the source is specified as a base-displacement addressing mode, base-displacement addressing mode. The address of the memory operand is calculated by taking the contains of register R2 and adding to them the immediate constant which in this example is minus 8. So, this, this competition results in the address of the operand which can be, which can then be send to memory so that the data can be fetched. Now, if I have written a program which uses such an instruction. I may have also assume that, I mean typically I would have written such a this instruction in my program because I want to use the value of this variable out of the register R1. Therefore, it is not to uncommon that the next instruction in my program would be an instruction that uses R1. The next instruction in this particular example is an add instruction which adds the contents of R1 to the contents of R2 and puts the result into the register R3. Remember that in our notation, the destination register is written for us, and then, the two source registers. So, the add instruction that I have shown you here is a register that uses the value of R1, and it is possible that my intend in writing a program containing these two instructions one after the other was that I want, let us say a value to be loaded from memory, let suppose that minus 8 R2 corresponds to my variable X. So, I wanted the value of X to get loaded into the register R1 and then I wanted to use that value in the addition. That might have been why I did the load followed by the add. Then, I must take into account the warning, which tells me that for a load instruction, the loaded value might not be available in the destination register for use by the instruction immediately following load. Therefore, a clearly the situation that we have here is dangerous. There is no guarantee that the add instruction will get the value that was loaded. Therefore, I as a programmer must or as a compiler writer must suggest for this warning, and the way that I could do it is by making sure that the load instruction and the instruction following it which uses the load, the loaded value are not consecutive got separated. Therefore, example if I take a modified version of the same code segment where there is a same load instruction and the same add instruction, but I put between the 2 an additional instruction, such that the instruction in between does not use register R1 for it is own sake. Then I find out that the add instruction even under that the warning which had been provided in the instruction set architecture manual will safely receive the loaded value as loaded into R1 by the load instruction, and the program would gone as I had expected or as I had planned originally. So, this is the kind of warning which one must very clearly take into account as the machine language program or as a compiler writer, and in this particular example, the situation was that the instruction set architecture manual warned me that there was this 1 LOAD DELAY SLOT. That load instruction and the instruction using the loaded value have to be separated essentially by at least one instruction but that could have been a most serious warning, for example, that they are is a need for 2 LOAD DELAY SLOTS. 
In which case, I would actually I have to write a program in such a form that there were at least two instructions between load instruction and the next instruction using the loaded value, and the warning would have been written in a form that would be understandable to us in this slide. In a moving on from this to the next warning, the warning about control transfer instructions. Now this will relate to our program. I will have to give you an example where there is a use of a control transfer instruction. So, I will use the example which we have already seen the example of the repeat until. Remember that I implemented the repeat until using a label on the beginning of the loop body followed by a conditional branch instruction - the BEQ. The condition is related to register R1. Now, that, the situation that I have here relating to the warning, just, let us just remember what the warning was. The warning was for the conditional, for a control transfer instruction, they could transfer of control takes place only following the instruction immediately after the control transfer instruction. So, how does that relate to the code segment that we have over here? Now, according to that warning, the transfer of control to head, which is the control transfer that I am interested in, that is the correct implementation of the repeat until will not happen as a side effect, will not happen just after the branch instruction, but will happen after the next instruction in the program has been executed. In other words, the diagram that I should have in mind is something like this. Given the warning that we have just read from the MIPS 1 instruction set architecture manual. In other words, if I had written the program assuming that immediately after the branch, you people instruction control is transfer to head, so, it is by the dotted arrow, then I would have misunderstood what was happening. In fact, the statement immediately is following the branch whatever it is, this also part of the repeat loop. It will be executed along with loop body regardless of what else may be present at the program. Therefore, thanks to our warning from the instruction set architecture manual. I must account for my repeat until implementation by possibly taking the last instruction from loop body if that is correct and putting it after the branch if equal instruction. So, this is very important warning and it so important. That, in fact, it is refer to as the BRANCH DELAY SLOT even in the MIPS manual, and once again, it is conceivable that for other processors, you may see that there is a warning of 2 BRANCH DELAY SLOTs which essentially tells you that two instructions after the control transfer instruction will be executed before the control transfer takes place. So, these are clearly very important, and we will take both of these warnings into account. In fact, I think, for most of the examples, I will overcome that the assumption there is 1 BRANCH DELAY SLOT and 1 LOAD DELAY SLOT in writing the code. So, that this, this idea of having to be aware of warnings of this kind will sink in, and so, the a correct implementation would have head, loop body branch instruction, and then, the next instruction with realization that the control transfer happens only after the next instruction. Now, that we have seen all the MIPS instructions. We can actually move to the last part of the instruction set architecture manual which is information about what each instruction looks like. 
Now, remember, the instructions of your program as generated by the compiler, end up in main memory your program executes out of main memory and if the instructions are to be in main memory, they are going to be represented in binary, and therefore, up to now I have been talking about instructions, such as, branch equal using very readable notation. In reality, that instruction is going to be in the form of some binary sequence - 32 bit binary sequence - what is the 32bit binary sequence look like? That is what we learn in the portion of the instruction set architecture manual which tells us about the encoding of instructions. Now, the MIPS 1 instruction set has three different format for instructions - the first format or the first instruction format is what is call the R format and it describes the diagram that we have over here, describes what? A 32 bit instruction which is in the R format would look like. In other words, what the different bits from the least significant bit to the most significant bit are used for, and you notice this that they are many different fields in this instruction. There is an op code field which we suspect must be some encoding of the operation - op code probably stands for operation code. There is a field label SRC 1 which is specification of the first source operand, second source operand, destination operand and various other fields. Now, if you look at the size of each of these fields, we notice that the each of the source operand fields is a 5 bit field, what is it mean to have a 5 bit field? A 5 bit field means a five bit binary value can take on values between 0 and 31, and this is in consisting with our understanding that in MIPS 1 instructions, if there is a register direct operand, one has to specify the identity of that register. So, any register between R 0 and R31 can be specified using the appropriate 5 bit sequence. So, the fore most question you remind at this point will be what are the different instructions? Which will be encoded using the R format? Very clearly any of the arithmetic or logical instructions which does not use an immediate operand can be encoded using this format. For example, consider I add R1 R2 R3, so, how will the, add the fact that this is an add operation, how will that be encoded? That would be encoded using the op code bits as well as possibly using the last 6 bits which are label this function code. To the, there are 12 bits available for encoding that the fact this is an add instruction. The fact that the destination operand is R1 and that the source operands are R2 and R3 would be encoded using the SRC 1 field, SRC 2 field and the destination field. So, what exactly would one see in the SRC 1 field for this particular example? This is a 5 bit field. What I want to see over there is an indication that R2 is the first source operand, and therefore, I would see the value 2 in that five bit field. What would the value 2 look like in the 5 bit field? It would look like 0 0 0 1 0. So, that is the exactly what I would expect to see in that 5 bit field. Similarly, for R1 over here I would expect to see 0 0 0 0 1, and for R3 I would expect to see 0 0 0 1 1. So, ultimately, when all the bit fields are filled up, we could understand exactly what the add R1 R2 R3 instruction looks like in 32 bits. 
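To make the R format discussion concrete, here is a small C sketch that packs add R1, R2, R3 into a 32-bit word. The field widths follow the description above; the bit positions are the standard MIPS 1 R-format layout, and the opcode and function-code values passed in main() are the commonly documented ones for ADD, which you would confirm against the instruction set manual.

#include <stdint.h>
#include <stdio.h>

uint32_t pack_r_format(uint32_t opcode, uint32_t src1, uint32_t src2,
                       uint32_t dest, uint32_t shamt, uint32_t funct)
{
    return (opcode << 26) | (src1 << 21) | (src2 << 16)
         | (dest  << 11) | (shamt << 6) | funct;
}

int main(void)
{
    /* add R1, R2, R3 : destination R1 (00001), sources R2 (00010) and R3 (00011) */
    uint32_t word = pack_r_format(0x00, 2, 3, 1, 0, 0x20);
    printf("encoded add R1, R2, R3 = 0x%08x\n", word);   /* prints 0x00430820 */
    return 0;
}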
You can find out what the different fields are, the different values, the exact values of the op code then the function code bits for the add instruction by looking up the appropriate section in the MIPS 1 Instruction set architecture manual if you needed to. So, basically the arithmetic and logical instructions which use only destination operands and source operands could be encoded using the R format. The second MIPS 1 Instruction set format is the I format or immediate format, because it has a 16 bit field inside the instruction. In addition, there is the 6 bit op code field, a 5 bit source operand field and a 5 bit destination operand field, and thus would imagine any instruction of the MIPS 1 instruction set which has a 16 bit displacement or immediate value can be encoded using this format. In fact, will be encoded using this format, and they are several examples. For example, the arithmetic and logical instructions which have immediate operands. The immediate operand is the 16 bit field, requires the 16 bit field. In this particular case, it requires only a 3 bit field, but in general, the immediate operand would be a signed 16 bit value. Therefore, the operation would be encoded in the op code field. The destination register would be encoded in the destination field. The first operand which is the source operand in register direct addressing would be encoded in the SRC 1 field, and the 16 bit immediate operand would be encoded in the constant field. So, in this particular example, I would see the value 8 in 2's complement in the 16 bit field. In other words, I would see a lot of 0's followed by 0 0, I mean lot of 0's followed by 1 1 1 0 0 0 which is the 16 bit 2's complement value, a representation of 8. What other instructions would be encoded using the I format? Think about the load instructions or the store instructions. All of these instructions have their operand specified in base, I am sorry, base-displacement addressing mode. They have to specify a base register and destination and displacement is the 16 bit sign displacement. So, once again, this I format could be used for this purpose. The fact that this is the load word could be encoded in the op code field, the destination can be encoded, I am sorry, the displacement can be encoded in the constant field. The base register can be encoded in the S1 SRC 1 field and the destination in the destination field. Anything else? Yes, even the conditional branch instructions can be encoded using this format. Recall that for the case of the conditional branches, the target is specified as a PC relative displacement, and the size of the displacement is 16 bits. So, once again there is the situation where the 16 bit field can be used for this purpose. The op code would be used to indicate that this is branch if less than 0 field. The R1 would be encoded in the source 1 field, and the 16 bit PC relative displacement in the constant field. So, many different instructions actually find the encoding using the I format. There is 1 other format in the MIPS 1 instruction set and that is the J format. The J format has a large 26 bit field, and as you would suspect that this is necessary for the instructions which have a target 26 absolute addressing mode of which there are a few - there is the jump instruction and there is the jump and link instruction. 
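Staying with the I format for a moment before the J format, the two operations the discussion above leans on are packing the opcode, SRC1, destination and 16-bit constant into one word, and sign-extending that 16-bit constant when it is used as an immediate, a displacement or a branch offset. The bit positions are the standard MIPS 1 layout, and the opcode value 0 in main() is only a placeholder.

#include <stdint.h>
#include <stdio.h>

uint32_t pack_i_format(uint32_t opcode, uint32_t src1, uint32_t dest,
                       uint16_t constant)
{
    return (opcode << 26) | (src1 << 21) | (dest << 16) | constant;
}

int32_t sign_extend16(uint16_t constant)
{
    return (int32_t)(int16_t)constant;        /* reinterpret the 16 bits as signed */
}

int main(void)
{
    printf("%d\n", sign_extend16(0xFFF8u));               /* prints -8 */
    printf("0x%08x\n", pack_i_format(0, 2, 1, 0xFFF8u));  /* placeholder opcode */
    return 0;
}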
So, for example, if there was a jump and link instruction in your program, in our notation I will represent the target by a label, but in the instruction, the label would have to be encoded by it is actual twenty 6 bit address using the mechanism that we had seen in the slide about control transfer instructions. So, with this, it turns out that only these three formats are necessary, and all the instructions that we have seen and other instructions which we have not seen like the system call instruction etcetera can all be encoded using these three formats. With this, we are actually in a position. We have, we can move forward to start looking at programs and in order to do this, I just wanted to remind you a little bit about what happens when you write a c program and how it moves towards being a program in the machine language. I will quickly recall something from one of the earliest lectures. Remember that when you write a program c, you have to actually compile it using a step called the compilation step, and that the net result if doing this is that your c program file which was the file containing your program, which you typed into a file using a text editor or something like that was compile translated into an equivalent program and the default name of the output file of gcc is a dot out. So, the output of gcc is the file called a dot out. It is the file containing and executable equivalent program in machine language. In other words, it could be it is a machine that you are working with is the MIPS 1 is a MIPS 1 machine, then this would be an equivalent program in the MIPS 1 machine language. Now, we saw that these translation happens to a series of steps. So, your program dot c goes through cpp, then you goes to cc1, and along the way a temporary file call hello dot, I am sorry, this would be program dot s was created which was used by one of the other steps, and other temporary file called program dot o was created which was merge with other files to generate an a dot out. We saw this in earlier lecture. In this lecture what I would like to point out is, that is, there are these different files - program dot c, program dot s, program dot o, library file a dot out. All of which are potentially available to you to look at as I had mentioned earlier. Now, some of these files are going to be easy to look at. For example, you are, you can always look a program dot c, and you may be able to find a mechanism, by which, you can look a program dot s. Now, both program dot c and program dot s are text files, and you will be able to look at these files; you can read them or write them using a text editor. Whatever text editor you are used to using, whatever mechanism you use to could a the additional program dot c. On the other hand, the other files that I have mentioned on the slide - program dot o a dot out and the library files - are not text files, but they are object files; in other words, they are binary files. They contain information which you cannot easily edit using a text file that this not mean that you cannot write programs to open those files to read them or write them, but you would have to go through the effort of actually writing a program to open and read an a dot out file, our program dot o file or it have to use something else like and octal dump program to do the same thing. So, some of these files that you encounter in the process of compilation or friendly and easy to read. 
You could therefore actually take a program find the way by which c c c would give you program dot s, and you could even edit - read and write and edit - the program dot s file and then pass it to the rest of the steps of compilation. So, with this thought in mind, we will move forward in the next lecture towards actually seeing what happens to a c program in terms of ending up in an a dot out file which contains MIPS 1 instructions. Thank you. ", "transcription_base": " Welcome to lecture 6 of a high performance computing. We are in the process of looking at the MIPS 1 instruction set in some detail. In the previous lecture we saw that the MIPS instruction set is a 32 bit in other words 32 bit word size load store in other words the only instructions that take memory operands are the load and store instructions and it is a risk instruction set. We saw that the 32 general purpose registers are 0 through R31 of which R0 always contains the value 0 and R31 is implicitly used by some instructions. So, there are two other additional registers called high and low. We saw the addressing modes which are used by the instructions and also looked in detail at two of the categories of instructions. The data transfer instructions, in other words loads, stores and moves to and from the registers high and low as well as the arithmetic and logical instructions. We are in the process of actually looking at the table of arithmetic and logical instructions. Let me remind you that there are add and subtract instructions which can also take immediate operands such as the second example. And the logical operations also have the possibility of immediate operands. The immediate are all signed 16 bit quantities which may have to be sign extended if they are to be operated on with 32 bit operands. All that remains to be understood in this table are the multiply and divide instructions, which have the oddity that they are the only instructions in the mixed one instructions set, which do not seem to have a destination operand specified, which we need to decipher. So let me just go through an example here. Here is an example of a multiply instruction, multiply R1, R2. So we suspect that R1 and R2 are the source operands of this instruction and that what the instruction is supposed to do is to take the 32 bit value out of R1, multiply it by the 32 bit value out of R2 and the only question is what is it supposed to do with the result? Now the first thing we need to understand is that the result is conceivably larger than 32 bits. The result could in fact be as large as 64 bits and I guess all of you, when you may have an understanding of this from the decimal equivalent. If I have two, let us suppose I am talking about decimal. If I have two digit numbers and I multiply them together, you know that the result is in this case three digits, but in general could be as high as 4 digits. So, when I multiply 2 digit numbers, I could get a number which is as big as 4 digits. When I multiply 2 digit numbers, the result could be less than 4 digits. Which is why in this comment up here, I mentioned that when I multiply 2 32 bit values, the result could be as large as 64 bits. This is a problem for the MIPS instructions set, you will remember that the size of all the general purpose registers is only 32 bits. And that is why when I do a multiplication of two register values, there is no guarantee that the result can fit into one of the general purpose registers. And this is where the high and low come into the picture. 
So, what happens in the case of the MIPS 1 instruction set is that the result of the multiplication, in other words the product, is put into the registers high and low. As the names suggest, the least significant 32 bits of the product go into the register low and the most significant 32 bits of the product go into the register high. And you know from the previous lecture that both high and low are 32 bit registers, and therefore the full 64 bits of the product can be stored and made available to the program. Now, typically you would expect that the program will have to check to see whether high is equal to 0, and if high is equal to 0 then we know that the full product is contained within the register low, and proceed accordingly. Now, what about the divide instruction? The divide instruction, like the multiply instruction, has only two operands; for example, I might divide R1 by R2. But in the case of divide, you know that the result is a quotient, and there is also another result, which is the remainder. How big could the quotient be? There is no problem with the size of the quotient, and there is no problem with the size of the remainder; both of them can only be as large as 32 bits. The only question is where those two should go. So, if I had a divide instruction in the MIPS 1 instruction set, the problem would then be: how do I specify two destination operands, one destination operand for the quotient and the other destination operand for the remainder, in a single instruction? And therefore, once again, to bypass this problem, they use the high and low registers. The quotient goes into low and the remainder goes into high. And this is basically all we need to know about the multiply and divide instructions. So, in effect, to do a multiplication or a division one can use a single instruction, but there will be follow up instructions to transfer the result, whether it be the quotient or the relevant 32 bits of the product, into one of the general purpose registers for the subsequent computation by the program. Now, with this we have actually completed our quick look at the MIPS 1 arithmetic and logical instructions and will now move on to the control transfer instructions. In some sense this is the largest and the most important category of instructions, because these are the instructions which are going to be used to create the control flow of the C program, and this is the part of programming which is probably the most challenging. Now, as you would expect from this table, I had pre-warned you that I am going to have to talk separately about the conditional branch instructions or branch instructions, the unconditional jump instructions, a family of instructions which are going to be used to implement function calls, which are called the jump and link instructions, and some other instructions. I have actually introduced the system call instruction at the bottom, and I am not going to say anything about it today, but we will come back to this instruction after 7 or 8 lectures, well into the course. So, please do remember that I talked about the system call instruction, and with that I will not talk about it any more today. Now, in terms of the terminology used in this table, one piece of terminology is that the letter R is used in this table to stand for register. In some of the notation I have used target sub 26, and target sub 26 is referring to an absolute operand of 26 bit size. Some of the instructions have a Z in their mnemonic, and Z stands for 0. 
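Before continuing with the rest of the notation in the table, the small sketch below makes the multiply and divide behaviour just described concrete. It is plain C, not MIPS code; the variable names hi, lo, quotient and remainder and the example register values are mine, purely for illustration of where the lecture says the results end up.

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        int32_t r1 = 100000, r2 = 70000;     /* example 32 bit register values, made up */

        /* MULT R1, R2: the 64 bit product is split across HIGH and LOW */
        int64_t  product = (int64_t)r1 * (int64_t)r2;
        uint32_t lo = (uint32_t)(product & 0xFFFFFFFF);      /* least significant 32 bits */
        uint32_t hi = (uint32_t)((uint64_t)product >> 32);   /* most significant 32 bits  */

        /* DIV R1, R2: the quotient goes into LOW and the remainder into HIGH */
        int32_t quotient  = r1 / r2;
        int32_t remainder = r1 % r2;

        printf("hi = %u  lo = %u\n", (unsigned)hi, (unsigned)lo);
        printf("quotient = %d  remainder = %d\n", quotient, remainder);
        return 0;
    }

If hi turns out to be 0, the whole product is sitting in lo, exactly as noted above. With that aside, back to the remaining notation in the table.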
And the only other notation which I have used here is that in this particular meanings column I have used a two pipe operator by which you can understand that I mean the operation, catenate or concatenate whichever you are more used to. So with this quick overview of the terminology we can try to understand these instructions one by one. So let me just remind you the conditional branch instructions or branch instructions and B stands for branch in all of these instructions will be used to transfer control of the program depending on whether a particular condition is true or not as opposed to the jump or unconditional branch instructions which transfer control unconditionally. So, if you look at this particular example, B L T Z R 2 minus 16 and look at the explanation it looks quite mysterious. So, just think that I need to explain this in a little bit more detail. So, let me separately talk about the conditional branch instructions. So, we now have the same example branch if less than 0 and the meaning of it. But let me just go through these mnemonics one by one. So we have B eq, we need to be able to read this in a slightly more, friendlier fashion than B eq, B and E, B g e z, etcetera, etcetera. We know that z stands for 0 and we expect that eq stands for equal, that the condition which is being tested as a test for equality. Using that as a hint we would test that n e stands for not equal, g e stands for greater than or equal L e for less than or equal L t for less than and g t for greater than which means that the kinds of conditions that are being tested by these instructions are equal which you understand in C by the equal equal operator N e which is not equal greater than or equal to 0, remember that the z at the end of b g e z stands for 0 less than or equal to 0 less than 0 and greater than 0. So, these mnemonics suggest that these are the conditions which are being tested whether two things are equal whether they are not equal whether something is greater than or equal to 0 less than or equal to 0 less than 0 or greater than 0. From this example we understand that for the comparison with 0 instructions The value inside a register is being compared with 0. So, this instruction basically means branch if less than 0 R 2 which you could read as branch if R 2 is less than 0 2 minus 16. Now, in the lead up to this in the in the previous lecture in talking about the different addressing modes used by the MIPS1 instruction set, I had indicated that the the branch instructions use a PC relative addressing mode. And in the case of a branch instruction, the first operand is very clearly being specified in the register direct addressing mode. And therefore, the PC relative addressing mode must be referring to the second operand, this minus 16, which is why we are not too surprised to see in the meaning column, that the meaning of the minus 16 is that the way that the program counter is changed. In other words, the way that the control flow of the program is modified is by taking the whole value of the program counter and subtracting 16 from it. At this point, we are not too sure why this plus 4 is there. That would become clearer when we talk about how the hardware to implement this instruction set could be implemented. But for the moment, we must understand that this is the meaning of the instruction. That if I execute an instruction branch of less x 10 0 r 2 minus 16. The meaning is that if the current value of the register r 2 is less than 0, then the program counter will be modified. 
In other words, there will be a change of control to the program counter of the branch instruction plus 4 minus 16 and the minus 16 is because of the fact that this is PC relative addressing. So, something similar is going to happen for all of the instructions which compare with 0. But what about the instructions that do not compare with 0? In other words, the branch of equal and the branch of not equal. Once again, if there is a condition which is being checked and the check of equality is being done, then two things must be being checked for equality. So, the question is what does the B E Q instruction look like? And the answer is that in the case of the B q and the B and E instruction, there are actually two operands. In fact, an example would be B e q r 1 r 2 minus 16 and this instruction basically reads branch to minus 16 if r 1 is equal to r 2, a branch of equal r 1 r 2 minus 16 and similarly for the branch of not equal. So, this family of instructions is adequate for many of the conditions that you will encounter, but if there are conditions which do not directly fall under this set of 6 comparisons, then one must find a way of achieving those conditions using these 6 instructions, which is the programming challenge or the challenge to the compiler. So, the idea of the BQ R1 R2 minus 16 is that the branch will be taken if the contents of R1 are equal to the contents of R2, if the contents of these 2 general purpose registers are equal. Now, in terms of terminology, you should note that take the branch is the same as saying that the program counter will be modified to something that is not just what it would have been if the branch had not been taken. And in terms of terminology, one often talks about the target address of a branch. The target address is the branch, the address to which control will be transferred if the branch is in fact taken as indicated by the PC relative addressing mode. So, from now on I will refer to target address of a branch or even the target address of a jump because even for a jump there is an address to which control will be transferred and in general we will refer to this as the target address. So, in this particular example the branch of less than 0 example that the target address is PC plus 4 minus 16 and as I said we do need to understand what the plus 4 is that will happen a little bit later. Now, in general for the examples that we write it will be somewhat inconvenient to write minus 16 and then to actually count down in the program or up in the program to see which is the instruction which is at that particular address. So, rather than planning on doing that for writing examples in this course I will rather go for the alternative of using a C like notation for the specification of the PC relative addressing mode. And I will explicitly just label instructions of the program and instead of putting the PC relative address of the target into the instruction, I will just put the label corresponding to the target in the instruction. This will make it a lot easier for us to read and understand code sequences in the rest of this course. So I am postponing only one thing in this discussion and that is to explain to you in more detail where this plus 4 is coming from otherwise we have understood how these conditional branches operate. So, let us move on to the jump instructions. So, in general J stands for jump and I had indicated that R stands for register. general jump is unconditional control transfer. 
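Before taking up the two jump examples in detail, here is a small C sketch of the two things a conditional branch involves, as just described: deciding whether the condition holds (the six comparisons) and, if it does, computing the target address as the PC of the branch, plus 4, plus the signed 16 bit displacement. The enum and function names are mine, just for illustration, and the plus 4 is taken on faith for now, exactly as in the lecture.

    #include <stdint.h>
    #include <stdio.h>

    enum cond { EQ, NE, GEZ, LEZ, LTZ, GTZ };   /* BEQ, BNE, BGEZ, BLEZ, BLTZ, BGTZ */

    /* Does the branch condition hold?  a and b are register values; the ...Z
       forms compare a single register against zero, so b is ignored for them. */
    static int branch_taken(enum cond c, int32_t a, int32_t b) {
        switch (c) {
        case EQ:  return a == b;
        case NE:  return a != b;
        case GEZ: return a >= 0;
        case LEZ: return a <= 0;
        case LTZ: return a < 0;
        case GTZ: return a > 0;
        }
        return 0;
    }

    /* PC relative target: PC of the branch, plus 4, plus the signed displacement. */
    static uint32_t branch_target(uint32_t pc, int16_t disp) {
        return pc + 4 + (int32_t)disp;
    }

    int main(void) {
        uint32_t pc = 0x00400100;                /* made-up PC of the branch         */
        int32_t  r2 = -5;                        /* made-up contents of R2           */
        if (branch_taken(LTZ, r2, 0))            /* BLTZ R2, -16 from the example    */
            pc = branch_target(pc, -16);         /* 0x00400100 + 4 - 16 = 0x004000F4 */
        printf("new PC = 0x%08x\n", (unsigned)pc);
        return 0;
    }

Now, back to the two jump examples.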
So, we have two examples jump to target and the target is specified in the instruction as a 26 bit absolute. Remember that in our notation slide I talked about target 26 as being a absolute something which is specified absolute addressing mode. In other words, it is inside the instruction. The second example is an example of Jr, I am sorry, it should be in R here, Jr R5. So, what do these instructions do? The second instruction very clearly changes the value of the program counter, so that program counter now contains whatever values present in R5. But the first example is little bit more complicated as can be seen from the meaning. So, how is the program counter manipulated in the case of the first example? Now, the And the problem that we are seeing over here is, as any jump instruction is an unconditional control transfer. In both these examples, the objective is to transfer control to the instruction at the target address. And the target address is specified using the absolute addressing mode. The target address itself has to be included inside the jump instruction. Since the target address is of size 32 bits, I am sorry, since the target address is going to be the address of an instruction, it would considerably have to be 32 bits in size. But obviously, I cannot include a 32 bit target address inside a 32 bit instruction. We saw this problem in the previous lecture. And therefore, the MIPS1 solution is that you can specify your allow to specify a 26 bit target address. And the hardware uses this 26 bit target address to construct a 32 bit address by adding two zeros as the least significant bits of the final target address and the remaining bits of the target address are just taken from the current value of the program counter. In other words, the value of the program counter associated with the jump instruction itself. Right? So, this is the way that they overcome the problem of not being able to specify a 32-bit target address and it may be a little bit restrictive in the range of addresses that can be used as targets for the jump instruction, but if that is the case then one can always use the jump register instruction in which case a 32 bit targeted risk can be placed into the register. So much for the jump instruction and the jump register instruction. With this we will move on to the next category of MIPS 1 control transfer instructions, the third line in our table. These are the jump and link instructions of which once again there are two and now we can read this as jump and link and this as jump and link register. So what do these instructions do? First of all let me just give you an example of each jump and link register as in the case of the jump register instruction from the previous table would have a single operand which is a register operand whereas the jump and link instruction has a single operand which is a specified in absolute addressing mode. And the meaning example over here applies to the first example, not to the second example. Now, the meaning that we see over here is little bit interesting. This is the first instance that we are seeing of a MIPS1 instruction in which the meaning of one instruction involved two operations. But you know all of the situations that we had the meaning was a single line or if there was one situation where the characters would not fit onto a single line, but here there are two separate operations which are happening as part of the jump and link instruction, which means that this is a very special kind of an instruction. 
It does more than one simple thing and if you look at this in more detail we see that it does two things one of them is to manipulate R 31 it puts PC plus 8 into R31 even though R31 is not an explicit operand of this instruction and then it modifies the program counter to now contain the value of R2. This is somewhat curious sequence of instructions. So, it seems to unconditionally transfer control to the instruction at the target address that is what the second step is doing while at the same time remembering PC plus 8 in register R31 which is what the first instruction is doing which is what the first operation in the sequence was doing. So, this is the first example that we are seeing in a the MIPS1 risk instruction set of an instruction which is doing something which is not a really primitive operation because in order to describe this I had to use two primitive operations and therefore, this must be really important otherwise they would not have included it. It is a deviation from the risk in some sense it is a deviation from the risk design principle of trying to have each instruction doing one single operation. So, as we will see I believe in the next lecture or possibly the lecture after that, this instruction is critical for the implementation of function calls and that is why it had to be included and how it is useful in the implementation of function calls we will see shortly. Now before actually moving to looking at programs in general, I wanted to just try to relate control transfer as you know it from your C programming experience to control transfer as we must view it now in the MIPS1 world. Now here in this table I have three examples of control transfer from the C from your C world. There is a go to statement, there is an if then else construct and there is a repeat loop. The repeat loop is one example of a loop. I could just as well have included a while loop or a four loop and after having gone through the repeat loop you will see that you yourself will be able to fill up the table with additional entries for a while loop or for a four loop without any great difficulty. So, the purpose of this table is to give us a good understanding of what we can expect to see in a MIPS 1 program in place of a go-to. In our original C program if there was a go-to, what can we expect to see in the MIPS 1 program. If in our original program there was an if then else, what can we expect to see in the MIPS 1 program and so on. So, let us start by thinking about the go-to. So, go-to in a C program is an example of a unconditional control transfer. Control is to be transferred to the statement which has that particular label unconditionally, no need to check any condition. And so, we suspect that in the MIPS world, the same effect is going to be achieved using an unconditional control transfer instruction. Therefore, either a jump instruction or a jump register instruction. Since in our current terminology, I am using labels in my MIPS 1 programs, but we will actually see is that go to label will map into a jump instruction to the same label. Now, there is a remote possibility that you will recall that under this notation label will be specified using the target 26 absolute addressing mode inside the jump instruction. 
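Since the label will end up as a 26 bit absolute target inside the instruction, it is worth seeing concretely how, as the lecture describes it, the hardware rebuilds the full 32 bit address: two zero bits are appended at the least significant end and the remaining upper bits are taken from the program counter of the jump itself. The sketch below is plain C; the struct, the function names and the example values are mine, and the jump and link function simply restates the two operations given as the meaning (R31 gets PC plus 8, then the PC is replaced). The register form, jump and link register, would instead take the new PC from a register.

    #include <stdint.h>
    #include <stdio.h>

    struct cpu { uint32_t pc; uint32_t r[32]; };   /* a toy machine state, R0..R31 */

    /* J / JAL target: append two zero bits, take the remaining upper bits from the PC. */
    uint32_t jump_address(uint32_t pc, uint32_t target26) {
        return (pc & 0xF0000000u) | ((target26 & 0x03FFFFFFu) << 2);
    }

    void do_jr(struct cpu *c, int s) {             /* JR Rs: PC gets the value in Rs */
        c->pc = c->r[s];
    }

    void do_jal(struct cpu *c, uint32_t target26) {
        c->r[31] = c->pc + 8;                      /* remember PC + 8 in R31          */
        c->pc    = jump_address(c->pc, target26);  /* then transfer control           */
    }

    int main(void) {
        struct cpu c = { 0x00400100, { 0 } };      /* made-up starting PC             */
        do_jal(&c, 0x0010000);                     /* some 26 bit target              */
        printf("after JAL: PC = 0x%08x  R31 = 0x%08x\n", (unsigned)c.pc, (unsigned)c.r[31]);
        do_jr(&c, 31);                             /* JR R31                          */
        printf("after JR : PC = 0x%08x\n", (unsigned)c.pc);
        return 0;
    }

From this construction you can also see the range restriction that comes up next: the uppermost bits of the target always come from the PC of the jump itself.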
Therefore, as I had suggested in the slide on jump instructions there is a possibility that if the target address of this jump in other words the label, the statement which is label by the go to is very far away from the jump instruction, then it may not be able to specify it appropriately using a 26 bit target address. And you can work out the kind of constraints which this places on the distance between the go to statement in your C program and the label and try to figure out if it is really that much of a constraint after all. remember that this is 26 bits which is actually a fair distance in terms of instructions, even if each instruction is 4 bytes in size. So, the go to a label will simply map into a jump to a label in almost all cases that you will encounter. Next let us consider the if x greater than 0, if then else construct. Now, I am using a generic kind of example here. In general there would be some condition, I am using the example of the condition is x greater than 0, x is a variable. In the then part there could be any number of statements. So, I am just indicating it by then part. In general if there is more than one statement you have to enclose that by braces and then there is the else part which could be once again any number of c statements. So, very clearly in the MIPS1 equivalent I will specific to this example. I will have to start by loading the variable x into a register. Because in the MIPS1 branch instructions, I clearly have to use a branch instruction, a conditional branch instruction to check this condition. The MIPS1 branch instructions all take their operands out of registers. Therefore, I will have to start by loading the value of this variable x into a register. Then I will have to compare its value to 0 and that I can do using one of those 6 branch instructions. appropriately the target of the branch will be a label which will relate to the then part or the else part. So, the way I am suggesting to work it out is as follows. I will start by getting the value of the variable x which is related to this condition into a register let us say r 1. Then in this particular case I want to transfer control to the then part if x is greater than 0, I have a way to check if x is greater than 0 in the MIPS1 instruction set using branch of greater than 0. So, if r 1 is greater than 0, then I transfer control to then part and I have a label called then part later in my program. What if r 1 is not greater than 0, then I want to transfer control to else part. Therefore, in this example, I have put my else part as a label immediately following the conditional branch instruction and therefore, this sequence will achieve what was required by the if then else. Now, we need not be too concerned that the then part appeared before the else part in the C program and in the MIPS program the else part appears before the then part that is just inconvenience if one is that does not even in inconvenience because ultimately one does not really read typical C program it does not even read the machine language code. Now, finally, we will go to the repeat loop and I am again using a somewhat generic example repeat any number of statements which are inside the repeat loop until this condition becomes true in other words until x becomes not equal to 0. Once again x is the name of a location in memory it is the name of a variable therefore, I will have to start by loading x into a register. 
Subsequently, I can use a conditional transfer instruction which will transfer control to the beginning of the loop body. Therefore, the way that this could be mapped is I have an instruction to load x into a register, then I have the loop body and I have associated with the loop body a label called loop head. So, that at the end of the loop body I check if R 1 is equal to 0, R 1 contains a variable x. Therefore, if R 1 is equal to 0 that means that this condition is not true And therefore, I have to go back to loop head and therefore, this achieves the effect of this conditional transfer of this loop. So, just know that even though the condition was not equal to 0, I ended up having to use a condition of equal to 0 because the not equal to 0 was the exit condition for the loop and therefore, the continue condition for the loop was equal to 0. And note here once again that I am comparing R 1 with 0 by using register R0, I had to do this because I am using the BEQ instruction which has two operands. Now just to make sure that the notation here is clear, inside my C statement there was a loop body which contained any number of C statements. Now those C statements would get translated by the compiler into some collection of machine MIPS1 instructions. And therefore the number of instructions in the loop body will depend on the number of statements in the loop body over here and we depend on how the compilation is done. But with this we understand that essentially the important C control transfer constructs that we are aware of can be achieved using the conditional and unconditional transfer instructions of the MIPS1 instruction set. So with this we have actually seen the data transfer instructions, the arithmetic and logical instructions and the control transfer instructions and we You are ready to actually look at real examples of MIPS1 programs. Now as it happens, if I had made available to you a MIPS1 instruction set architecture manual, then you would have read about the details of each of these instructions separately. Each of these instructions would have been mentioned on a separate page and on the page they would have given you all the information that I have gone through over here, but some additional information as well. some of the additional information that would have been included in the instruction set manual would have been of some relevance. And therefore, as important aside I am going to mention two kinds of important comments that one might see in an instruction set architecture manual relating to some of the instructions that we have come across. So, I have labeled this slide as interesting notes from the instruction set architecture manual. Now for some risk instruction sets, you find comments like this associated with load instructions in the instruction set architecture manual. The loaded value might not be available in the destination register for use by the instruction immediately following the load. This is obviously a comment which was included on the page of the load instruction as a a warning to the programmer. Because by default, the programmer might assume that if there is a load instruction that the value which is to be loaded into the destination register will become available as soon as the instruction has been executed and therefore the instruction immediately following the load instruction can definitely use the value in the destination register. 
This is an explicit warning that that is not the case, and it is apparently present in the instruction set architecture manuals of many MIPS 1 processors, which is why I have included it here. In fact, this warning is so important that it is given a name: the load delay slot, indicating that the programmer must be aware that load instructions have a delay associated with them. They do not cause the destination register to become updated as soon as you would imagine. There is also an element of uncertainty. If you look at this description, it says that the loaded value might not be available. So, in some implementations of the MIPS 1 instruction set the loaded value might be available, or for some programs, when they execute, the loaded value might for some reason be available, but in general the programmer cannot assume that the loaded value will be available. So, this is one kind of warning which you might see. Another kind of warning which you might see in an instruction set architecture manual for a RISC-like architecture like the MIPS 1 relates to control transfer instructions, and might say something like this: the transfer of control takes place only following the instruction immediately after the control transfer instruction. Now, typically you would expect that the transfer of control takes place as part of executing the control transfer instruction itself, and that the instruction following the control transfer instruction therefore does not come into the picture. But here we have explicit mention that the transfer of control takes place only after the instruction immediately after the control transfer instruction has been executed. Therefore, this is definitely a very important warning to a programmer, particularly if he or she is programming in the machine language. Once again, this is such an important warning that it appears on the relevant page of the instruction set architecture manual and it is given a name: the branch delay slot. A warning that control transfers are also delayed; they do not actually happen as fast as you would have thought. So, from the programmer's perspective, what does this mean? Let us just try to understand what I as a programmer should do if I see these kinds of warnings in an instruction set architecture manual. Let us start with the warning about load instructions. The warning was something to do with when the loaded value will be available in the destination register. So, let us suppose that I have written a program and there is a load word instruction. Remember, this is the load word instruction; we saw it not too long back. A load instruction in general causes a value to be copied from main memory, from a variable in main memory, into a register, and this particular instruction does such a load of a word size, or 32 bit size, quantity. The destination is the register R1 and the source is specified in base displacement addressing mode. In this addressing mode, the address of the memory operand is calculated by taking the contents of register R2 and adding to it the immediate constant, which in this example is minus 8. This computation results in the address of the operand, which can then be sent to memory so that the data can be fetched. 
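To spell that address computation out, here is a tiny C sketch; the value placed in the base register is made up for illustration. The point is simply that the 16 bit displacement, minus 8 here, is sign extended and added to the contents of R2.

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        uint32_t r2   = 0x10010040;          /* made-up contents of the base register R2 */
        int16_t  disp = -8;                  /* the signed 16 bit displacement           */
        uint32_t addr = r2 + (int32_t)disp;  /* sign extend, then add: 0x10010038        */
        printf("effective address = 0x%08x\n", (unsigned)addr);
        return 0;
    }

That address is what goes out to memory; now, back to the program that uses this instruction.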
Now, if I have written a program which uses such an instruction, I might have also assumed that I mean typically I would have written such a this instruction in my program because I want to use the value of this variable out of the register R1. The first not too uncommon that the next instruction in my program would be an instruction that uses R1. The next instruction in this particular example is an ad instruction which adds the contents of R1 to the contents of R2 and puts the result into the register R3. Remember that in our notation the destination register is written first and then the two source registers. So the add instruction that I have shown you here is a register that uses the value of R1 and it is possible that my intent in writing a program containing these two instructions one after the other was that I want let us say a value to be loaded from memory let us suppose that minus 8 R2 corresponds to my variable x. So I want to the value of x to get loaded into the register R1 and then I want to use that value in the addition that might have been why I did the load followed by the add. Then I must take into account the warning which tells me that for a load instruction the loaded value might not be available in the destination register for use by the instruction immediately following the load. Therefore, clearly the situation that we have here is dangerous. There is no guarantee that the add instruction will get the value that was loaded. Therefore, I as a programmer must or as a compiler writer must adjust for this warning. And the way that I could do it is by making sure that the load instruction and the instruction following it which uses the loaded value are not consecutive but separated. So for example if I take a modified version of the same code segment where there is the same load instruction and the same add instruction but I put between the two an additional instruction such that the instruction in between does not use register R1 for its own sake, then I find out that the add instruction even under that the warning which had been provided in the instructions at architecture manual will safely receive the loaded value as loaded into R1 by the load instruction and the program would run as I had expected or as I had planned originally. So this is the kind of warning which one must very clearly take into account as a machine language program or as a compiler writer. And in this particular example, the situation was that the instructions set architecture manual warned me that there was this one load delays slot that load instruction and the instruction using the loaded value have to be separated essentially by at least one instruction. But there could have been more serious warning for example, that there is a need for two load delays slots in which case I would actually have had to write a program in such a form that there were at least two instructions between the load instruction and the next instruction using the loaded value. And the warning would have been written in a form that would be understandable to us in this light. Now, moving on from this to the next warning, the warning about control transfer instructions. Now, this will relate to a program, I will have to give you an example where there is use of a control transfer instruction. So, I will use an example which we have already seen, the example of the repeat until. 
Remember that I implemented the repeat until using a label on the beginning of the loop body followed by a conditional branch instruction, the B EQ. The condition is related to register R1. Now the situation that I will have here relating to the warning, just let us just remember what the warning was. The warning was for a conditional for a control transfer instruction the transfer of control takes place only following the instruction immediately after the control transfer instruction. So how does that relate to the code segment that we have over here. Now, according to that warning, the transfer of control to head which is the control transfer that I am interested in that is the correct implementation of the repeat until will not happen as a side effect will not happen just after the branch instruction but will happen after the next instruction in the program has been executed. In other words, the diagram that I should have in mind is something like this right given the warning that we had just read from the MIPS1 instruction set architecture manual. In other words, if I had written the program assuming that immediately after the branch of equal instruction control is transferred to head such as by the dotted arrow, then I would have misunderstood what was happening. In fact, the statement immediately following the branch whatever it is is also part of the repeat loop. It will be executed along with loop body regardless of what else may be present in the program. Therefore, thanks to our warning from the instructions of architecture manual, I must account for my repeat until implementation by possibly taking the last instruction from loop body if that is correct and putting it after the branch if equal instruction. So this is very important warning and it is so important that in fact it is referred to as the branch delay slot even in the MIPS manual. And once again it is conceivable that for other processors you may see that there is a warning of two branch delay slots which essentially tells you that two instructions after the control transfer instruction will be executed before the control transfer takes place. So, these are clearly very important and we will take both of these warnings into account. In fact, I think for most of the examples I will work under the assumption that there is one branch delay slot and one load delay slot in writing the code. So, that this idea of having to be aware of warnings of this kind will sink in. So, the correct implementation would have head loop body branch instruction and then the next instruction with the realization that the control transfer happens only after the next instruction. Now, that we have seen all the MIPS instructions, we can actually move to the last part of the instruction set architecture manual which is information about what each instruction looks like. Now remember the instructions of your program as generated by the compiler end up in main memory your program executes out of main memory and if the instructions are to be in main memory they are going to be represented in binary. And therefore, up to now I have been talking about instructions such as branch which equal using very readable notation. In reality that instruction is going to be in the form of some binary sequence, a 32 bit binary sequence. What is the 32 bit binary sequence look like? That is what we learn in the portion of the instruction set architecture manual which tells us about the encoding of instructions. 
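Before turning to the encodings, it may help to put the repeat until example and the two warnings together in one place. Below is a runnable C version of the generic repeat until from the table, with the MIPS style sequence that the lecture describes written as comments. The comments use the lecture's notation (R0, R1, labels), not real assembler syntax, and the delay slot remark is exactly the adjustment just discussed.

    #include <stdio.h>

    int main(void) {
        int x = 0;
        /* repeat { loop body } until (x != 0)
         *
         * MIPS style sketch from the lecture, ignoring delay slots:
         *     LW   R1, x              # get the variable x into a register
         * loophead:
         *     ... loop body ...
         *     BEQ  R1, R0, loophead   # continue while x == 0 (R0 is always 0)
         *
         * With one branch delay slot, the instruction written immediately after
         * the BEQ is executed before control actually reaches loophead again, so
         * the lecture suggests moving the last loop body instruction, if that is
         * correct to do, into that slot.
         */
        do {
            printf("in the loop body, x = %d\n", x);
            x = x + 1;                /* the loop body eventually makes x != 0 */
        } while (x == 0);             /* until (x != 0) is the same as while (x == 0) */
        return 0;
    }

Now, on to what the 32 bit encodings actually look like.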
Now, the MIPS 1 instruction set has three different formats for instructions. The first instruction format is what is called the R format, and the diagram that we have over here describes what a 32 bit instruction in the R format would look like, in other words what the different bits, from the least significant bit to the most significant bit, are used for. You notice that there are many different fields in this instruction: there is an opcode field, which we suspect must be some encoding of the operation (opcode probably stands for operation code); there is a field labeled SRC1, which is a specification of the first source operand; then the second source operand, the destination operand and various other fields. Now, if we look at the size of each of these fields, we notice that each of the source operand fields is a 5 bit field. What does it mean to have a 5 bit field? A 5 bit binary value can take on values between 0 and 31, and this is consistent with our understanding that in MIPS 1 instructions, if there is a register direct operand, one has to specify the identity of that register. So, any register between R0 and R31 can be specified using the appropriate 5 bit sequence. The foremost question in your mind at this point will be: what are the different instructions which will be encoded using the R format? Very clearly, any of the arithmetic or logical instructions which does not use an immediate operand can be encoded using this format. For example, consider add R1, R2, R3. How will the fact that this is an add operation be encoded? That would be encoded using the opcode bits, as well as possibly using the last six bits, which are labeled the function code. So, there are 12 bits available for encoding the fact that this is an add instruction. The fact that the destination operand is R1 and that the source operands are R2 and R3 would be encoded using the SRC1 field, the SRC2 field and the destination field. So, what exactly would one see in the SRC1 field for this particular example? This is a 5 bit field; what I want to see over there is an indication that R2 is the first source operand, and therefore I would see the value 2 in that 5 bit field. What would the value 2 look like in that 5 bit field? It would look like 0, 0, 0, 1, 0. So, that is exactly what I would expect to see in that 5 bit field. Similarly, for R1 over here I would expect to see 0, 0, 0, 0, 1, and for R3 I would expect to see 0, 0, 0, 1, 1. So, ultimately, when all the bit fields are filled up, we could understand exactly what the add R1, R2, R3 instruction looks like in 32 bits. You can find out the exact values of the opcode and the function code bits for the add instruction by looking up the appropriate section in the MIPS 1 instruction set architecture manual if you needed to. So, basically, the arithmetic and logical instructions which use only destination operands and source operands could be encoded using the R format. The second MIPS 1 instruction set format is the I format, or immediate format, because it has a 16 bit field inside the instruction. In addition, there is a 6 bit opcode field, a 5 bit source operand field and a 5 bit destination operand field. And as you would imagine, any instruction of the MIPS 1 instruction set which has a 16 bit displacement or immediate value can be encoded using this format; in fact, it will be encoded using this format. And there are several examples. 
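Before going through those examples, here is a small C sketch of how the bit fields of the two formats described so far could be packed into a 32 bit word. This is only an illustration of the layout: it assumes the standard MIPS bit positions (opcode in the top 6 bits, then the 5 bit register fields, with the 6 bit function code at the bottom of the R format), which is what the lecture's diagram appears to show, and it leaves the exact opcode and function code values as parameters, since, as mentioned, those come from the manual.

    #include <stdint.h>
    #include <stdio.h>

    /* R format: opcode(6) | src1(5) | src2(5) | dest(5) | other(5) | funct(6) */
    static uint32_t pack_r(unsigned op, unsigned src1, unsigned src2,
                           unsigned dest, unsigned other, unsigned funct) {
        return (op    & 0x3Fu) << 26 | (src1  & 0x1Fu) << 21 | (src2 & 0x1Fu) << 16 |
               (dest  & 0x1Fu) << 11 | (other & 0x1Fu) << 6  | (funct & 0x3Fu);
    }

    /* I format: opcode(6) | src1(5) | dest(5) | 16 bit constant */
    static uint32_t pack_i(unsigned op, unsigned src1, unsigned dest, int16_t constant) {
        return (op & 0x3Fu) << 26 | (src1 & 0x1Fu) << 21 |
               (dest & 0x1Fu) << 16 | (uint16_t)constant;
    }

    int main(void) {
        /* add R1, R2, R3: src1 = 2 (00010), src2 = 3 (00011), dest = 1 (00001);
           the real opcode and function code values come from the manual (0 used here). */
        printf("add R1, R2, R3 pattern      -> 0x%08x\n", (unsigned)pack_r(0, 2, 3, 1, 0, 0));

        /* an immediate of 8 in the 16 bit constant field is ...0000 1000 */
        printf("immediate-form pattern (8)  -> 0x%08x\n", (unsigned)pack_i(0, 2, 1, 8));
        return 0;
    }

Now, for the examples of instructions that use the I format.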
For example, the arithmetic and logical instructions which have immediate operands: the immediate operand requires a 16 bit field. In this particular case the value would fit in far fewer bits, but in general the immediate operand would be a signed 16 bit value. Therefore, the operation would be encoded in the opcode field, the destination register in the destination field, the first operand, which is the source operand in register direct addressing mode, in the SRC1 field, and the 16 bit immediate operand in the constant field. So, in this particular example I would see the value 8 in 2's complement in this 16 bit field; in other words, I would see a lot of 0s followed by 1, 0, 0, 0, which is the 16 bit 2's complement representation of 8. What other instructions would be encoded using the I format? Think about the load instructions or the store instructions. All of these instructions have their operand specified in base displacement addressing mode: they have to specify a base register, a destination and a displacement, and the displacement is a 16 bit signed displacement. So, once again, the I format could be used for this purpose. The fact that it is a load word could be encoded in the opcode field, the displacement in the constant field, the base register in the SRC1 field and the destination in the destination field. Anything else? Yes, even the conditional branch instructions can be encoded using this format. Recall that for the conditional branches the target is specified as a PC relative displacement, and the size of the displacement is 16 bits, so once again the 16 bit field can be used for this purpose. The opcode would be used to indicate that this is, say, a branch if less than 0 instruction, the register operand would be encoded in the SRC1 field and the 16 bit PC relative displacement in the constant field. So, many different instructions actually find their encoding using the I format. There is one other format in the MIPS 1 instruction set, and that is the J format. The J format has a large 26 bit field, and as you suspect this is necessary for the instructions which have a target sub 26 absolute addressing mode, of which we know there are a few: there is the jump instruction and there is the jump and link instruction. So, for example, if there was a jump and link instruction in your program, in our notation I will represent the target by a label, but in the instruction the label would have to be encoded by its actual 26 bit address, using the mechanism that we had seen in the slide about control transfer instructions. So, with this, it turns out that only these three formats are necessary, and all the instructions that we have seen, and other instructions which we have not seen, like the system call instruction etcetera, can all be encoded using these three formats. With this, we are actually in a position where we can move forward to start looking at programs, and in order to do this I just wanted to remind you a little bit about what happens when you write a C program and how it moves towards being a program in the machine language. I will quickly recall something from one of our earlier lectures. Remember that when you write a program in C, you have to actually compile it using a step called the compilation step. 
And that the net result of doing this is that your C program file, which was a file containing your program, which you typed into a file using a text editor or something like that, was compiled translated into an equivalent program. And the default name of the output file of GCC is A dot out. So, the output of GCC is the file called A dot out. It is a file containing an executable equivalent program in machine language. In other words, it could be if the machine that you are working with is the MIPS 1, is a MIPS 1 machine, then this would be an equivalent program in the MIPS 1 machine language. Now we saw that this translation happens through a series of steps. So, your program program g.c goes to cpp then it goes to cc1 and along the way a temporary file called hello.s this would be program.s was created which was used by one of the other steps another temporary file called program.o was created which was merged with other files to generate an a dot out. We saw this in an earlier lecture. In this lecture what I would like to point out is that there are these different files program.c program.s program.o library file A dot out, all of which are potentially available to you to look at as I had mentioned earlier. Now some of these files are going to be easy to look at. For example, you can always look at program dot C and you may be able to find a mechanism by which you can look at program dot S. Now both program dot C and program dot S are text files and you will be able to look at these files, you can read them or write them using a text editor, whatever text editor you are used to using, whatever mechanism you used to create the original program.c. On the other hand, the other files that I have mentioned on this slide, program.o, a.m. out and the library files are not text files, but they are object files. In other words, they are binary files. They contain information which you cannot easily edit using a text file. That does not mean that you cannot write programs to open those files to read them or write them. But you would have to go through the Therefore, of actually writing a program to open and read an A.out file or a program.offile or you have to use something else like an octal dump program to do the same thing. So some of these files that you encounter in the process of compilation are friendly and easy to read. You could therefore actually take a program find a way by which CCC would give you program.s and you could even edit, read and write and edit the program.s file and then pass it through the rest of the steps of compilation. So, with this thought in mind, we will move forward in the next lecture towards actually seeing what happens to a C program in terms of ending up in an 8 out of file which contains MIPS1 instructions. Thank you.", "transcription_medium": " Welcome to lecture 6 of a high performance computing. We are in the process of looking at the MIPS 1 instruction set in some detail. In the previous lecture, we saw that the MIPS instruction set is a 32 bit, in other words, 32 bit word size. Load store, in other words, the only instructions that take memory operands are the load and store instructions and it is a risk instruction set. We saw that there are 32 general purpose registers R 0 through R 31 of which R 0 is always contains the value 0 and R 31 is implicitly used by some instructions. We saw that there are two other additional registers called high and low. 
We saw the addressing modes which are used by the instructions and also looked in detail at two of the categories of instructions, the data transfer instructions, in other words, loads, stores and moves to and from the registers high and low, as well as the arithmetic and logical instructions. We are in the process of actually looking at the table of arithmetic and logical instructions. Let me remind you that there are add and subtract instructions which can also take immediate operands such as the second example and the logical operations also have the possibility of immediate operands. The remediates are all signed 16 bit quantities which may have to be sign extended if they are to be operated on with 32 bit operands. All that remains to be understood in this table are the multiply and divide instructions, which have the oddity that they are the only instructions in the MIPS 1 instruction set, which do not seem to have a destination operand specified, which we need to decipher. So, let me just go through an example here. Here is an example of a multiply instruction, multiply R 1, R 2. So, we suspect that R 1 and R 2 are the source operands of this instruction and that what the instruction is supposed to do is to take the 32 bit value out of R 1, multiply it by the 32 bit value out of R 2 and the only question is what is it supposed to do with the result. Now, the first thing we need to understand is that the result is conceivably larger than 32 bits. Then result could in fact be as large as 64 bits and I guess all of you, many of you may have an understanding of this from the decimal equivalent. If I have two, let us suppose I am talking about decimal, if I have two two digit numbers and I multiply them together, you know that the result is in this case three digits, but in general could be as high as four digits. So, when I multiply two two digit numbers, I could get a number which is as big as four digits. When I multiply two two digit numbers, the result could be less than four digits, which That is why in this comment up here, I mentioned that when I multiply to 32 bit values, the result could be as large as 64 bits. This is a problem for the MIPS instruction set, because you will remember that the size of all the general purpose registers is only 32 bits. And that is why, when I do a multiplication of two register values, there is no guarantee that the result can fit into one of the general purpose registers. And this is where the high and low come into the picture. So, what happens in the case of the MIPS 1 instruction set is that the result of the multiplication, in other words, the product is put into the registers high and low. And as the name suggests, the least significant bits of the product go into the register low and the most significant 32 bits of the product go into the register high. And you know that both high and low from the previous lecture, I told you, both high and low are 32 bit registers and therefore, the full 32 bits of the product can be stored and made available to the program. Now, typically you would expect that the program will have to check to see whether high is equal to 0 and if high is equal to 0, then we know that the full product is contained within the register low and proceed accordingly. Now, what about the divide instruction? The divide instruction like the add instruction has only two operands. For example, I might divide R 1 by R 2. 
But in the case of divide, you know that the result is a quotient, but there is also another result which is the remainder. How big could the quotient be? We know there is no problem with the size of the quotient. There is no problem with the size of the remainder. Both of them can only be as large as 32 bits. The only question is where should those two go? So, this if If I had a divide instruction in the MIPS 1 instruction set, the problem would then be how do I specify two destination operands? In other words, one destination operand for the quotient, the other destination operand for the remainder in a single instruction. And therefore, once again to bypass this problem, they use the high and low registers. The quotient goes into low and the remainder goes into high. And this is basically all we need to know about the multiply and divide instructions. So, in effect, to do a multiplication or a division, one can use a single instruction, but there will be follow up instructions to transfer the result, whether it be the quotient or the 32 bits of the product into one of the general purpose registers for the subsequent computation by the program. Now, with this, we have actually completed our quick look at the MIPS 1 arithmetic and logical instructions and will now move on to the control transfer instructions. In some sense this is the largest and most important category of instructions because these are the instructions which are going to be used to create the control flow of the C program and is the part of programming which is probably the most challenging. Now, as you would expect in this table I had pre warned you that I am going to have to talk separately about the conditional branch instructions or branch instructions, the unconditional jump instructions, a family of instructions which are going to be used to implement function calls which are called the jump and link instructions and some other instructions. I have actually introduced the system call instruction at the bottom and I am not going to say anything about it today, but we will come back to this instruction after 7 or 8 lectures well into the course. Therefore, please do remember that I talked about the system call instruction and with that I will not talk about it any more today. Now, in terms of the terminology used in this table, one piece of terminology that is used is the letter R is used in this table to stand for register. In some of the notation, I have used target sub 26 and target sub 26 is referring to an absolute operand of 26 bit size. Some of the instructions have a Z in their mnemonic and Z stands for 0. The only other notation which I have used here is that in this particular meanings column, I have used a two pipe operator by which you can understand that I mean the operation catenate or concatenate whichever you are more used to. So, with this quick overview of the terminology, we can try to understand these instructions one by one. So, let me just remind you the conditional branch instructions or branch instructions and B stands for branch in all of these instructions will be used to transfer control of the program depending on whether a particular condition is true or not as opposed to the jump or unconditional branch instructions which transfer control unconditionally. So, if you look at this particular example B L T Z R 2 minus 16 and look at the explanation it looks quite mysterious. So, just think that I need to explain this in a little bit more detail. 
So, let me separately talk about the conditional branch instructions. So, we now have the same example, branch if less than 0 and the meaning of it, but let me just go through these mnemonics one by one. So, we have B E Q, we need to be able to read this in a slightly friendlier fashion than B E Q, B N E, B G E Z etcetera. We know that z stands for 0 and we expect that E q stands for equal that the condition which is being tested as a test for equality. Using that as a hint, we would guess that n e stands for not equal, g e stands for greater than or equal, l e for less than or equal, l t for less than and g t for greater than, which means that the kinds of conditions that are being tested by these instructions are equal, which you understand in C by the equal operator n e, which is not equal greater than or equal to 0. Remember that the z at the end of B G E z stands for 0 less than or equal to 0 less than 0 and greater than 0. So, these mnemonic suggest that these are the conditions which are being tested, whether two things are equal, whether they are not equal, whether something is greater than or equal to 0, less than or equal to 0, less than 0 or greater than 0. From this example, we understand that for the comparison with 0 instructions, the value inside a register is being compared with 0. So, this instruction basically means, branch if less than 0 R 2, which you could read as branch if R 2 is less than 0 to minus 16. Now, in the lead up to this in the previous lecture in talking about the different addressing modes used by the MIPS 1 instruction set, I had indicated that the branch instructions use a PC relative addressing mode. And in the case of a branch instruction, the first operand is very clearly being specified in the register direct addressing mode and therefore, the PC relative addressing mode must be referring to the second operand this minus 16, which is why we are not too surprised to see in the meaning column that the meaning of the minus 16 is that the way that the program counter is changed. In other words, the way that the control flow of the program is modified is by taking the old value of the program counter and subtracting 16 from it. At this point we are not too sure why this plus 4 is there, that will become clearer when we talk about how the hardware to implement this instruction set could be implemented. But for the moment we must understand that this is the meaning of the instruction, that if I execute an instruction branch of less than 0 R 2 minus 16, the meaning is that if the current value of the register R 2 is less than 0, then the program counter will be modified. In other words, there will be a change of control to the program counter of the branch instruction plus 4 minus 16 and the minus 16 is because of the fact that this is PC relative addressing. So, something similar is going to happen for all of the instructions which compare with 0, but what about the instructions that do not compare with 0? In other words, the branch if equal and the branch if not equal. Once again, if there is a condition which is being checked and the check of equality is being done, then two things must be being checked for equality. So, the question is what is the BEQ instruction look like? And the answer is that in the case of the BEQ and the BNE instruction, there are actually two operands. In fact, an example would be BEQ R 1 R 2 minus 16. 
And this instruction basically reads branch to minus 16 if R 1 is equal to R 2, a branch if equal R 1 R 2 minus 16 and similarly, for the branch if not equal. So, this family of instructions is adequate for many of the conditions that you will encounter, but if there are conditions which do not directly fall under this set of six comparisons, then one must find a way of achieving those conditions using these six instructions, which is the programming challenge or the challenge to the compiler. So, the idea of the BEQ R 1 R 2 minus 16 is that the branch will be taken if the contents of R 1 are equal to the contents of R 2. If the contents of these two general purpose registers are equal. Now, in terms of terminology, you should note that take the branch is the same as saying that the program counter will be modified to something that is not just what it would have been if the branch had not been taken. And in terms of terminology, one often talks about the target address of a branch. The target address is the branch, I mean the address to which control will be transferred if the branch is in fact taken, as indicated by the PC relative addressing mode. So, from now on, I will refer to target address of a branch or even the target address of a jump, because even for a jump there is an address to which control will be transferred and in general we will refer to this as the target address. So, in this particular example, the branch of less than 0 example, the target address is PC plus 4 minus 16 and as I said we do need to understand what the plus 4 is that will happen a little bit later. Now, in general for the examples that we write, it will be somewhat inconvenient to write minus 16 and then to actually count down in the program or up in the program to see which is the instruction which is at that particular address. So, rather than planning on doing that for writing examples in this course, I will rather go for the alternative of using a C like notation for the specification of the PC relative addressing mode and I will explicitly just label instructions of the program and instead of putting the the PC relative address of the target into the instruction, I will just put the label corresponding to the target in the instruction. This will make it a lot easier for us to read and understand code sequences in the rest of this course. So, I am postponing only one thing in this discussion and that is to explain to you in more detail where this plus 4 is coming from, otherwise we have understood how these conditional branches operate. So, let us move on to the jump instructions. So, in general, J stands for jump and I had indicated that R stands for register. In general, jump is unconditional control transfer. So, we have two examples, jump to target and the target is specified in the instruction as a 26 bit absolute. Remember that in our notation slide, I had talked about target 26 as being a absolute, something which is specified absolute addressing mode. In other words, it is inside the instruction. Second example is an example of JR, I am sorry, it should be an R here, JR R 5. So, what do these instructions do? The second instruction very clearly changes the value of the program counter, so that program counter now contains whatever value is present in R 5, but the first example is little bit more complicated as can be seen from the meaning. So, how is the program counter manipulated in the case of the first example? 
Now, the problem that we are seeing over here is, as any jump instruction is an unconditional control transfer, in both these examples, the objective is to transfer control to the instruction at the target address, right. And the target address is specified using the absolute addressing mode. The target address itself has to be included inside the jump instruction. Since the target address is of size 32 bits, I am sorry, since the target address is going to be the address of an instruction, it will conceivably have to be 32 bits in size. But obviously, I cannot include a 32 bit target address inside a 32 bit instruction. We saw this problem in the previous lecture. And therefore, the MIPS 1 solution is that you can specify, you are allowed to specify a 26 bit target address and the hardware uses this 26 bit target address to construct a 32 bit address by adding two zeros as the least significant bits of the final target address and the remaining bits of the target address are just taken from the current value of the program counter. In other words, the value of the program counter associated with the jump instruction itself. So, this is the way that they overcome the problem of not being able to specify a 32 bit target address and it may be a little bit restrictive in the range of addresses that can be used as targets for the jump instruction. But if that is the case, then one can always use the jump register instruction, in which case a 32 bit target address can be placed into the register. So, much for the jump instruction and the jump register instruction. With this, we will move on to the next category of MIPS 1 control transfer instructions, the third line in our table. These are the jump and link instructions of which once again there are two and now we can read this as jump and link and this as jump and link register. So, what do these instructions do? First of all, let me just give you an example of each jump and link register. As in the case of the jump register instruction from the previous table would have a single operand, which is a register operand, whereas the jump and link instruction has a single operand, which is a specified in absolute addressing mode. And the meaning example over here applies to the first example not to the second example. Now, the meaning that we see over here is little bit interesting. This is the first instance that we are seeing of a MIPS 1 instruction in which the meaning of 1 instruction involved two operations. Until now, all of the situations that we had the meaning was a single line or if there was one situation where the characters would not fit on to a single line, but here there are two separate operations which are happening as part of the jump and link instruction, which means that this is a very special kind of an instruction. It does more than one simple thing and if we look at this in more detail, we see that it does two things. One of them is to manipulate R 31. It puts PC plus 8 into R 31 even though R 31 is not an explicit operand of this instruction and then it modifies the program counter to now contain the value of R 2. This is somewhat curious sequence of instructions. So, it seems to unconditionally transfer control to the instruction at the target address. That is what the second step is doing. While at the same time remembering PC plus 8 in register R 31, which is what the first instruction is doing, which is what the first operation in the sequence was doing. 
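Putting the jump family side by side may help; this is a sketch in the same notation, where the label names are invented, || is the concatenation operator from the notation slide, and the bit arrangement in the comments is the standard MIPS 1 one just described (the 26 bit field padded with two zero bits, the remaining 4 high order bits taken from the program counter of the jump itself).

J    sometarget    # PC <- (upper 4 bits of the PC of the jump) || target26 || 00
JR   R5            # PC <- contents of R5, a full 32 bit target address
JAL  somefunction  # two operations in one instruction:
                   #   R31 <- PC + 8   (a return address is remembered)
                   #   PC  <- target address built from the 26 bit field
JALR R2            # the register form: R31 <- PC + 8 ; PC <- contents of R2

A later JR R31 can then send control back to the remembered address, which is a preview, on our part, of the function call use that the lecturer says will come shortly.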
So, this is the first example that we are seeing in the MIPS 1 risk instruction set of an instruction, which is doing something, which is not a really primitive operation, because in order to describe this, I had to use two primitive operations and therefore, this must be really important, otherwise I would not have included it. It is a deviation from the risk, in some sense it is a deviation from the risk design principle of trying to have every each instruction doing only one single operation. So, as we will see, I believe in the next lecture, possibly the lecture after that, this instruction is critical for the implementation of function calls and that is why it had to be included and how it is useful in the implementation of function calls, we will see shortly. Now, before actually moving to looking at programs in general, I wanted to just try to relate control transfer as you know it from your C programming experience to control transfer as we must view it now in the MIPS 1 world. Now, here in this table, I have three examples of control transfer from your C world. There is a go to statement, there is an if then else construct and there is a repeat loop. The repeat loop is one example of a loop. I could just as well have included a while loop or a for loop and after having gone through the repeat loop, you will see that you yourself will be able to fill up the table with additional entries for a while loop or for a for loop without any great difficulty. So, the purpose of this table is to give us a good understanding of what we can expect to see in a MIPS 1 program in place of a goto. In our original C program, there was a goto, what can we expect to see in the MIPS 1 program? If in our original program there was an if then else, what can we expect to see in the MIPS 1 program and so on. So, let us start by thinking about the goto. So, goto in a C program is an example of a unconditional control transfer. Control is to be transferred to the statement which has that particular label unconditionally, no need to check any condition. And so, we suspect that in the MIPS world, the same effect is going to be achieved using an unconditional control transfer instruction. Therefore, either a jump instruction or a jump register instruction. Since, in our current terminology, I am using labels in my MIPS 1 programs. What we will actually see is that, goto label will map into a jump instruction to the same label. Now, there is a remote possibility that you You will recall that under this notation, label will be specified using the target 26 absolute addressing mode inside the jump instruction. Therefore, as I had suggested in the slide on jump instructions, there is a possibility that if the target address of this jump, in other words, the label, the statement which is labeled by the go to is very far away from the jump instruction, then it may not be able to specify it appropriately using a 26 bit target address. And you can work out the kind of constraints which this places on the distance between the goto statement in your C program and the label and try to figure out if it is really that much of a constraint after all. Remember that this is 26 bits which is actually a fair distance in terms of instructions, even if each instruction is 4 bytes in size. So, the goto to a label will simply map into a jump to a label in almost all cases that you will encounter. Next, let us consider the if X greater than 0, the if then else construct. Now, I am using a generic kind of a example here. 
In general, there will be some condition. I am using the example where the condition is X greater than 0, X being a variable. In the then part, there could be any number of statements, so I am just indicating it by then part; in general, if there is more than one statement, you would have to enclose them in braces. And then there is the else part, which could once again be any number of C statements. So, very clearly in the MIPS 1 equivalent, specific to this example, I will have to start by loading the variable X into a register, because I clearly have to use a conditional branch instruction to check this condition, and the MIPS 1 branch instructions all take their operands out of registers. Therefore, I will have to start by loading the value of this variable X into a register. Then I will have to compare its value to 0, and that I can do using one of those 6 branch instructions. And appropriately, the target of the branch will be a label which will relate to the then part or the else part. So, the way I am suggesting to work it out is as follows. I will start by getting the value of the variable X, which is related to this condition, into a register, let us say R 1. Then, in this particular case, I want to transfer control to the then part if X is greater than 0. I have a way to check if X is greater than 0 in the MIPS 1 instruction set using branch if greater than 0. So, if R 1 is greater than 0, then I transfer control to then part, and I have a label called then part later in my program. What if R 1 is not greater than 0? Then I want to transfer control to else part. Therefore, in this example, I have put my else part as a label immediately following the conditional branch instruction. And therefore, this sequence will achieve what was required by the if then else. Now, we need not be too concerned that the then part appears before the else part in the C program while in the MIPS program the else part appears before the then part. That is just an inconvenience, and not really even that, because ultimately a typical C programmer does not read the machine language code anyway. Now, finally, we will go to the repeat loop, and I am again using a somewhat generic example: repeat any number of statements which are inside the repeat loop until this condition becomes true, in other words, until X becomes not equal to 0. Once again, X is the name of a location in memory, it is the name of a variable. Therefore, I will have to start by loading X into a register. Subsequently, I can use a conditional transfer instruction which will transfer control to the beginning of the loop body. Therefore, the way that this could be mapped is, I have an instruction to load X into a register, then I have the loop body, and I have associated with the loop body a label called loop head. So that at the end of the loop body, I check if R 1 is equal to 0; R 1 contains the variable X. Therefore, if R 1 is equal to 0, that means that the until condition is not yet true, and therefore I have to go back to loop head, and this achieves the effect of the loop. So, just note that even though the condition was not equal to 0, I ended up having to use a condition of equal to 0, because not equal to 0 was the exit condition for the loop and therefore the continue condition for the loop was equal to 0. Note here once again that I am comparing R 1 with 0 by using register R 0, as laid out in the sketch that follows.
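As a concrete version of the two mappings just described, here is a sketch in the label notation adopted earlier; the label names are invented, the load of X is written schematically, and the final jump over the then part is an added detail that the lecture has not discussed at this point.

# if (X > 0) { then part } else { else part }
        LW    R1, X            # get the value of the variable X into a register
        BGTZ  R1, thenpart     # branch if greater than 0
elsepart:                      # fall through to here when R1 is not greater than 0
        ...                    # else part statements
        J     joinpoint        # skip over the then part (added detail)
thenpart:
        ...                    # then part statements
joinpoint:

# repeat { loop body } until (X != 0);
        LW    R1, X            # X in a register
loophead:
        ...                    # loop body statements
        BEQ   R1, R0, loophead # continue while X is equal to 0; R0 always holds 0,
                               # which is how a register is compared with 0 using
                               # the two operand BEQ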
I had to do this because I am using the BEQ instruction which has two operands. Now, just to make sure that the notation here is clear, inside my C statement, there was a loop body which contained any number of C statements. Now, those C statements would get translated by the compiler into some collection of machine MIPS 1 instructions. And therefore, the number of instructions in the loop body will depend on the number of statements in the loop body over here and would depend on how the compilation is done. But with this, we understand that essentially the important control transfer constructs that we are aware of can be achieved using the conditional unconditional transfer instructions of the MIPS 1 instruction set. So, with this, we have actually seen the data transfer instructions, the arithmetic and logical instructions and the control transfer instructions and we are ready to actually look at real examples of MIPS 1 programs. Now, as it happens, if I had made available to you a MIPS 1 instruction set architecture manual, then you would have read about the details of each of these instructions separately. Each of these instructions would have been mentioned on a separate page and on the page they would have given you all the information that I have gone through over here, but some additional information as well. And some of the additional information that would have been included in the instruction set manual would have been of some relevance. And therefore, as important aside, I am going to mention two kinds of important comments that one might see in an instruction set architecture manual relating to some of the instructions that we have come across. So, I have labeled this slide as interesting notes from the instruction set architecture manual. Now, for some risk instruction sets, you find comments like this associated with load instructions in the instruction set architecture manual. The loaded value might not be available in the destination register for use by the instruction immediately following the load. This is obviously a comment which was included on the page of the load instruction as a warning to the programmer, because by default the programmer might assume that if there is a load instruction that the value which is to be loaded into the destination register will become available as soon as the instruction has been executed and therefore, the instruction immediately following the load instruction can definitely use the value in the destination register. This is an explicit warning that that is not the case and it is present in apparently in the instruction set architecture manual of many MIPS 1 processors, which is why I have included it here. In fact, this warning is so important that it is given a name. It is given the name of load delay slot indicating that the programmer must be aware that load instructions have a delay associated with them. They do not cause the destination register to become updated as soon as you would imagine. There is also this element of uncertainty. If you look at this description, it says that the loaded value might not be available suggesting that in some implementations of the MIPS 1 instruction set, the loaded value might be available or for some programs when they execute for some reason the MIPS the loaded value might be available. But in general, the programmer cannot assume that the loaded value will be available. So, this is one kind of warning which you might see. 
Another kind of warning which you might see in an instruction set architecture manual for a RISC-like architecture like the MIPS 1 relates to control transfer instructions and might say something like this: the transfer of control takes place only following the instruction immediately after the control transfer instruction. Now, typically you would expect that the transfer of control takes place as part of executing the control transfer instruction itself, and therefore that the instruction following the control transfer instruction would not come into the picture; but here we have explicit mention that the transfer of control takes place only after the instruction immediately after the control transfer instruction has been executed. Therefore, this is definitely a very important warning to a programmer, particularly if he or she is programming in this language, in the machine language. And once again, this is such an important warning that it appears on the page of the instruction set architecture manual and is given a name, the branch delay slot, a warning that control transfers are also delayed. They do not actually happen as fast as you would have thought. So, from the programmer's perspective, what does this mean? Let us just try to understand what I as a programmer should do if I see these kinds of warnings in an instruction set architecture manual. Let us start first with the warning about load instructions. The warning had to do with when the loaded value will be available in the destination register. So, let us suppose that I have written a program and there is a load word instruction. Remember, this is a load word instruction; we saw this not too long back. A load instruction in general causes a value to be copied from main memory, from a variable in main memory, into a register, and this particular instruction will do such a load of a word size, or 32 bit, quantity. The destination is the register R 1 and the source is specified in the base displacement addressing mode. The address of the memory operand is calculated by taking the contents of register R 2 and adding to them the immediate constant, which in this example is minus 8. This computation results in the address of the operand, which can then be sent to memory so that the data can be fetched. Now, if I have written a program which uses such an instruction, typically I would have written this instruction in my program because I want to use the value of this variable out of the register R 1. Therefore, it is not too uncommon that the next instruction in my program would be an instruction that uses R 1. The next instruction in this particular example is an add instruction, which adds the contents of R 1 to the contents of R 2 and puts the result into the register R 3. Remember that in our notation, the destination register is written first and then the two source registers. So, the add instruction that I have shown you here is an instruction that uses the value of R 1, and it is possible that my intent in writing a program containing these two instructions one after the other was that I want, let us say, a value to be loaded from memory. Let us suppose that minus 8 (R 2) corresponds to my variable X. So, I wanted the value of X to get loaded into the register R 1 and then I wanted to use that value in the addition. That might have been why I did the load followed by the add; the two instructions are written out in the sketch below.
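Written out in the notation we have been using, the two instruction sequence being described is simply the following; treating minus 8 off R 2 as the location of the variable X is, as the lecturer says, just the assumed intent of the example.

LW  R1, -8(R2)    # R1 <- the memory word at address (contents of R2) - 8,
                  # here taken to be the value of the variable X
ADD R3, R1, R2    # R3 <- R1 + R2; destination first, then the two sources;
                  # this instruction wants the value just loaded into R1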
Then I must take into account the warning which tells me that for a load instruction, the loaded value might not be available in the destination register for use by the instruction immediately following the load. Therefore, clearly the situation that we have here is dangerous: there is no guarantee that the add instruction will get the value that was loaded. Therefore, I as a programmer, or as a compiler writer, must adjust for this warning. And the way that I could do it is by making sure that the load instruction and the instruction following it which uses the loaded value are not consecutive, but separated. So, for example, if I take a modified version of the same code segment, with the same load instruction and the same add instruction, but I put between the two an additional instruction such that the instruction in between does not itself use register R 1, then I find that the add instruction, even under the warning which had been provided in the instruction set architecture manual, will safely receive the loaded value as loaded into R 1 by the load instruction, and the program will run as I had expected, as I had planned originally. So, this is the kind of warning which one must very clearly take into account as a machine language programmer or as a compiler writer. And in this particular example, the situation was that the instruction set architecture manual warned me that there was this one load delay slot, that is, the load instruction and the instruction using the loaded value have to be separated by at least one instruction. But there could have been a more serious warning, for example that there are two load delay slots, in which case I would actually have had to write the program in such a form that there were at least two instructions between the load instruction and the next instruction using the loaded value. The warning would have been written in a form much like the one we have seen on this slide. Now, moving on from this to the next warning, the warning about control transfer instructions. This will relate to a program, so I will have to give you an example where there is use of a control transfer instruction, and I will use the example which we have already seen, the example of the repeat until. Remember that I implemented the repeat until using a label on the beginning of the loop body followed by a conditional branch instruction, the BEQ, where the condition is related to register R 1. Now, to see the situation that I will have here relating to the warning, let us just remember what the warning was. The warning was that for a control transfer instruction, the transfer of control takes place only following the instruction immediately after the control transfer instruction. So, how does that relate to the code segment that we have over here? According to that warning, the transfer of control to the loop head, which is the control transfer that I am interested in for a correct implementation of the repeat until, will not happen just after the branch instruction, but will happen only after the next instruction in the program has been executed. In other words, given the warning that we had just read from the MIPS 1 instruction set architecture manual, the picture that I should have in mind is something like the sketch below.
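Roughly, and with the invented label names from the earlier sketch, the picture is this: whatever instruction sits immediately after the BEQ is in the branch delay slot, and it is executed before control actually gets back to the head of the loop.

        LW   R1, X              # as before
loophead:
        ...                     # loop body statements
        BEQ  R1, R0, loophead   # the branch decision is made here, but ...
        <next instruction>      # ... this instruction, whatever it happens to be,
                                # is also executed, on every iteration, before
                                # control reaches loophead again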
In other words, if I had written the program assuming that immediately after the branch if equal instruction control is transferred to the loop head, such as by the dotted arrow, then I would have misunderstood what was happening. In fact, the instruction immediately following the branch, whatever it is, is also part of the repeat loop: it will be executed along with the loop body on every iteration, regardless of what else may be present in the program. Therefore, thanks to our warning from the instruction set architecture manual, I must adjust my repeat until implementation, possibly by taking the last instruction of the loop body, if that is correct to do, and putting it after the branch if equal instruction. So, this is a very important warning, and it is so important that, in fact, it is referred to as the branch delay slot even in the MIPS manual. And once again, it is conceivable that for other processors you may see a warning of two branch delay slots, which essentially tells you that two instructions after the control transfer instruction will be executed before the control transfer takes place. So, these are clearly very important and we will take both of these warnings into account. In fact, I think for most of the examples, I will work under the assumption that there is one branch delay slot and one load delay slot in writing the code, so that this idea of having to be aware of warnings of this kind will sink in. So, the correct implementation would have the loop head, the loop body, the branch instruction and then the next instruction, with the realization that the control transfer happens only after that next instruction. Now that we have seen all the MIPS instructions, we can actually move to the last part of the instruction set architecture manual, which is information about what each instruction looks like. Remember, the instructions of your program as generated by the compiler end up in main memory, your program executes out of main memory, and if the instructions are to be in main memory, they are going to be represented in binary. Up to now, I have been talking about instructions such as branch if equal using very readable notation. In reality, that instruction is going to be in the form of some binary sequence, a 32 bit binary sequence. What does the 32 bit binary sequence look like? That is what we learn in the portion of the instruction set architecture manual which tells us about the encoding of instructions. Now, the MIPS 1 instruction set has three different formats for instructions. The first instruction format is what is called the R format, and the diagram that we have over here describes what a 32 bit instruction in the R format would look like, in other words, what the different bits from the least significant bit to the most significant bit are used for. You notice that there are many different fields in this instruction. There is an opcode field, which we suspect must be some encoding of the operation; opcode probably stands for operation code. There is a field labeled SRC 1, which is the specification of the first source operand, then the second source operand, the destination operand and various other fields. Now, if we look at the size of each of these fields, we notice that each of the source operand fields is a 5 bit field. What does it mean to have a 5 bit field? A 5 bit binary value can take on values between 0 and 31, and the layout is sketched below.
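For reference, the usual way this 32 bit R format is drawn is sketched below; the field order and the two fields not yet named in the lecture (a shift amount and the function code) follow the standard MIPS 1 layout and should be checked against the slide or the manual.

# R format, 32 bits, most significant field on the left:
#   opcode (6) | SRC1 (5) | SRC2 (5) | dest (5) | shift amount (5) | function code (6)
#   6 + 5 + 5 + 5 + 5 + 6 = 32 bits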
And this is consistent with our understanding that in MIPS 1 instructions, if there is a register direct operand, one has to specify the identity of that register. So, any register between R 0 and R 31 can be specified using the appropriate 5 bit sequence. So, the foremost question in your mind at this point will be, what are the different instructions which will be encoded using the R format. Very clearly, any of the arithmetic or logical instructions which does not use an immediate operand can be encoded using this format. For example, consider add R 1, R 2, R 3. So, how will the add, the fact that this is an add operation, how will that be encoded? That would be encoded using the op code bits as well as possibly using the last 6 bits which are labeled as function code. So, there are 12 bits available for encoding the fact that this is an add instruction. The fact that the destination operand is R 1 and that the source operands are R 2 and R 3 would be encoded using the SRC 1 field, SRC 2 field and the destination field. So, what exactly would one see in the SRC 1 field for this particular example? This is a 5 bit field, what I want to see over there is an indication that R 2 is the first source operand. Therefore, I would see the value 2 in that 5 bit field. What would the value 2 look like in that 5 bit field? It would look like 0 0 0 1 0. So, that is exactly what I would expect to see in that 5 bit field. Similarly, for R 1 over here, I would expect to see 0 0 0 0 1 and for R 3 I would expect to see 0 0 0 1 1. So, ultimately when all the bit fields are filled up, we could understand exactly what the add R 1 R 2 R 3 instruction looks like in 32 bits. You can find out what the different fields are, the different values, the exact values of the opcode and the function code bits for the add instruction by looking up the appropriate section in the MIPS 1 instruction set architecture manual, if you needed to. So, basically the arithmetic and logical instructions which use only destination operands and source operands could be encoded using the R format. The second MIPS 1 instruction set format is the I format or immediate format, because it has a 16 bit field inside the instruction. In addition, there is a 6 bit opcode field, a 5 bit source operand field and a 5 bit destination operand field. And as you would imagine, any instruction of the MIPS 1 instruction set which has a 16 bit displacement or immediate value can be encoded using this format. In fact, will be encoded using this format. And there are several examples. For example, the arithmetic and logical instructions which have immediate operands. The immediate operands is a 16 bit field, requires a 16 bit field. In this particular case, it requires only a 3 bit field, but in general, the immediate operand would be assigned 16 bit value. Therefore, the operation would be encoded in the opcode field, the destination register would be encoded in the destination field, the first operand which is the source operand in register direct addressing mode would be encoded in the SRC 1 field and the 16 bit immediate operand would be encoded in the constant field. So, in this particular example, I would see the value 8 in 2's complement in this 16 bit field. In other words, I would see a lot of 0's followed by 0 0, I mean lot of 0's followed by 1 1 1 0 0 0, which is the 16 bit 2's complement value representation of 8. What other instructions would be encoded using the I format? Think about the load instructions or the store instructions. 
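Putting the pieces together for the two examples just discussed gives something like the sketch below. The exact opcode and function code bit patterns are the ones commonly documented for MIPS 1 and are offered here as an assumption to be confirmed from the manual, as the lecturer suggests; the registers in the immediate example are also invented, since they were not named.

# ADD R1, R2, R3 in the R format:
#   opcode = 000000, SRC1 = 00010 (R2), SRC2 = 00011 (R3),
#   dest = 00001 (R1), shift amount = 00000, function code = 100000
ADD  R1, R2, R3

# An immediate add such as ADDI R1, R2, 8 in the I format:
#   the opcode field encodes ADDI, SRC1 = 00010 (R2), the destination field = 00001 (R1),
#   and the 16 bit constant field holds 0000000000001000, which is 8 in
#   16 bit 2's complement
ADDI R1, R2, 8

The same I format fields also serve the load, store and conditional branch instructions, which is where the discussion goes next.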
All of these instructions have their operand specified in base, I am sorry, base displacement addressing mode. They have to specify a base register and a destination and a displacement and the displacement is a 16 bit sign displacement. So, once again this I format could be used for this purpose. The fact that it is load word could be encoded in the opcode field, the destination can be encoded, I am sorry, the displacement can be encoded in the constant field, the base register can be encoded in the S R C 1 field and the destination in the destination field. Anything else? Yes, even the conditional branch instructions can be encoded using this format. Recall that for the case of the conditional branches, the target is specified as a PC relative displacement and the size of the displacement is 16 bits. So, once again there is a situation where the 16 bit field can be used for this purpose. The opcode would be used to indicate that and this is branch of less than 0 field. The R 1 would be encoded in the source 1 field and the 16 bit PC relative displacement in the constant field. So, many different instructions actually find their encoding using the I format. There is one other format in the MIPS 1 instruction set and that is the J format. The J format has a large 26 bit field and as you suspect, this is necessary for the instructions which have a target 26 absolute addressing mode of which we know there are a few. There is the jump instruction and there is a jump and link instruction. So, for example, if there was a jump and link instruction in your program, in our notation, I will represent the target by a label, but in the instruction, the label would have to be encoded by its actual 26 bit address using the mechanism that we had seen in the slide about control transfer instructions. So, with this it turns out that only these three formats are necessary and all the instructions that we have seen and other instructions which we have not seen like the system call instruction etcetera can all be encoded using these three formats. this we are actually in a position where we can move forward to start looking at programs and in order to do this I just wanted to remind you a little bit about what happens when you write a C program and how it moves towards being a program in the machine language. And so I will quickly recall something from one of our earliest lectures. Remember that when you write a program in C you have to actually compile it using a step called the compilation step and that the net result of doing this is that your C program file, which was a file containing your program, which you typed into a file using a text editor or something like that, was compiled translated into an equivalent program and the default name of the output file of GCC is A dot out. So, the output of GCC is the file called A dot out. It is a file containing an executable equivalent program in machine language. In In other words, it could be if the machine that you are working with is the MIPS 1 is a MIPS 1 machine, then this would be an equivalent program in the MIPS 1 machine language. Now, we saw that this translation happens through a series of steps. So, your program program dot C goes to C p p, then it goes to C c 1 and along the way a temporary file called hello dot I am sorry this should be program dot S was created, which was used by one of the other steps, another temporary file called program dot o was created, which was merged with other files to generate an a dot out. 
We saw this in an earlier lecture. In this lecture, what I would like to point out is, that is, there are these different files program dot c, program dot s, program dot o, library file, a dot out, all of which are potentially available to you to look at, as I had mentioned earlier. Now, some of these files are going to be easy to look at. For example, you are, you can always look at program dot C and you may be able to find a mechanism by which you can look at program dot S. Now, both program dot C and program dot S are text files and you will be able to look at these files you can read them or write them using a text editor whatever text editor you are used to using whatever mechanism you used to create the original program dot C. On the other hand, the other files that I have mentioned on this slide program dot O a dot out and the library files are not text files, but they are object files. In other they are binary files, they contain information which you cannot easily edit using a text file. That does not mean that you cannot write programs to open those files to read them or write them, but you would have to go through the effort of actually writing a program to open and read an a dot out file or a program dot o file or you would have to use something else like an octal dump program to do the same thing. So, some of these files that you encounter in the process of compilation are friendly and easy to read. You could therefore, actually take a program, find a way by which ccc would give you program dot s and you could even edit, read and write and edit the program dot s file and then pass it through the rest of the steps of compilation. So, with this thought in mind, we will move forward in the next lecture towards actually seeing what happens to a C program in terms of ending up in an a dot out file which contains MIPS 1 instructions. Thank you.", "transcription_large_v3": " Welcome to lecture 6 of a high performance computing. We are in the process of looking at the MIPS 1 instruction set in some detail. In the previous lecture, we saw that the MIPS instruction set is a 32 bit, in other words, 32 bit word size, load store, in other words, the only instructions that take memory operands are the load and store instructions and it is a risk instruction set. We saw that the 32 general purpose registers, R 0 through R 31 of which R 0 always contains the value 0 and R 31 is implicitly used by some instructions. We saw that there are two other additional registers called high and low. We saw the addressing modes which are used by the instructions and also looked in detail at two of the categories of instructions, the data transfer instructions, in other words, loads, stores and moves to and from the registers high and low, as well as the arithmetic and logical instructions. We are in the process of actually looking at the table of arithmetic and logical instructions. Let me remind you that there are add and subtract instructions, which can also take immediate operands such as the second example. The logical operations also have the possibility of immediate operands. Immediates are all signed 16 bit quantities, which may have to be sign extended if they are to be operated on with 32 bit operands. All that remains to be understood in this table are the multiply and divide instructions, which have the oddity that they are the only instructions in the MIPS 1 instruction set, which do not seem to have a destination operand specified, which we need to decipher. 
So, let me just go through an example here. Here is an example of a multiply instruction, multiply R 1 R 2. So, we suspect that R 1 and R 2 are the source operands of this instruction and that what the instruction is supposed to do is to take the 32 bit value out of R 1, multiply it by the 32 bit value out of R 2 and the only question is what is it supposed to do with the result. Now, the first thing we need to understand is that the result is conceivably larger than 32 bits. Then, result could in fact be as large as 64 bits. And, I guess all of you, many of you may have an understanding of this from the decimal equivalent. If I have two, let us suppose I am talking about decimal. If I have two digit numbers and I multiply them together, you know that the result is in this case three digits, but in general could be as high as 4 digits. So, when I multiply 2 digit numbers, I could get a number which is as big as 4 digits. When I multiply 2 digit numbers, the result could be less than 4 digits, which is why in this comment up here, I mentioned that when I multiply to 32 bit values, the result could be as large as 64 bits. This is a problem for the MIPS instruction set, because you will remember that the size of all the general purpose registers is only 32 bits. And that is why when I do a multiplication of two register values, there is no guarantee that the result can fit into one of the general purpose registers. And this is where the high and low come into the picture. So, what happens in the case of the MIPS 1 instruction set is that the result of the multiplication, in other words the product is put into the registers high and low. And as the name suggest, the least significant bits of the product go into the register low and the most significant 32 bits of the product go into the register high. And you know that both high and low from the previous lecture, what I told you, both high and low are 32 bit registers. And therefore, the full 32 bits of the product can be stored and made available to the program. Now, typically you would expect that the program will have to check to see whether high is equal to 0 and if high is equal to 0, then we know that the full product is contained within the register low and proceed accordingly. Now, what about the divide instruction? The divide instruction like the add instruction has only two operands. For example, I might divide R 1 by R 2. In the case of divide, we know that the result is a quotient, but there is also another result, which is the remainder. How big could the quotient be? We know there is no problem with the size of the quotient, there is no problem with the size of the remainder, both of them can only be as large as 32 bits. The only question is where should those two go? So, this if I had to divide instruction in the MIPS 1 instruction set, the problem would then be where do I, how do I specify two destination operands? In other words, one destination operand for the quotient, the other destination operand for the remainder in a single instruction. And therefore, once again to bypass this problem, they use the high and low registers. The quotient goes into low and the remainder goes into high. And this is basically all we need to know about the multiply and divide instructions. 
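A small sketch of how this is typically used follows; the written mnemonics MULT, DIV, MFHI and MFLO, and the particular destination registers, are the usual MIPS 1 ones and are an assumption on our part, since the slide itself is not reproduced in the transcript.

MULT R1, R2    # (HI, LO) <- R1 * R2; LO receives the least significant 32 bits
               # of the 64 bit product, HI the most significant 32 bits
MFLO R3        # move from LO: R3 <- low half of the product
MFHI R4        # move from HI: R4 <- high half (check it for 0 if the program
               # expects a result that fits in 32 bits)

DIV  R1, R2    # LO <- quotient of R1 / R2, HI <- remainder
MFLO R5        # quotient into a general purpose register
MFHI R6        # remainder into a general purpose register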
So, in effect to do a multiplication or a division, one can use a single instruction, but there will be follow up instructions to to transfer the result, whether it be the quotient or the 32 bits of the product into one of the general purpose registers for the subsequent computation by the program. Now, with this we have actually completed our quick look at the MIPS 1 arithmetic and logical instructions and we will now move on to the control transfer instructions. In some sense, this is the largest and the most important category of instructions, because These are the instructions which are going to be used to create the control flow of the C program and is the part of programming which is probably the most challenging. Now, as you would expect in this table, I had prewarned you that I am going to have to talk separately about the conditional branch instructions or branch instructions, the unconditional jump instructions, a family of instructions which are going to be used to implement function calls, which are called the jump and link instructions and some other instructions. I have actually introduced the system call instruction at the bottom and I am not going to say anything about it today, but we will come back to this instruction after 7 or 8 lectures well into the course. Therefore, please do remember that I talked about the system call instruction and with that I will not talk about it anymore today. Now, in terms of the terminology used in this table, one piece of terminology that is used is the letter R is used in this table to stand for register. In some of the notation, I have used target sub 26 and target sub 26 is referring to an absolute operand of 26 bit size. Some of the instructions have a Z in their mnemonic and Z stands for 0. And the only other notation which I have used here is that in this particular meanings column, I have used a 2 pipe operator by which you can understand that I mean the the operation catenate or concatenate, whichever you are more used to. So, with this quick overview of the terminology, we can try to understand these instructions one by one. So, let me just remind you the conditional branch instructions or branch instructions and B stands for branch in all of these instructions will be used to transfer control of the program depending on whether a particular condition is true or not, as opposed to the jump or unconditional branch instructions, which transfer control unconditionally. So, if you look at this particular example, B L T Z R 2 minus 16 and look at the explanation, it looks quite mysterious. So, just think that I need to explain this in a little bit more detail. So, let me separately talk about the conditional branch instructions. So, we We now have the same example branch of less than 0 and the meaning of it, but let me just go through these mnemonics one by one. So, we have B E Q, we need to be able to read this in a slightly friendlier fashion than B E Q, B N E, B G E Z, etcetera. We know that Z stands for 0 and we expect that E Q stands for equal, that the condition which is being tested as a test for equality. Using that as a hint, we would guess that n e stands for not equal, g e stands for greater than or equal, l e for less than or equal, l t for less than and g t for greater than, which means that the kinds of conditions that are being tested by these instructions are equal, which you understand in C by the equal equal operator n e, which is not equal greater than or equal to 0. 
Remember that the z at the end of b g e z stands for 0 less than or equal to 0 less than 0 and greater than 0. So, these mnemonic suggest that these are the conditions which are being tested, whether two things are equal, whether they are not equal, whether something is greater than or equal to 0, less than or equal to 0, less than 0 or greater than 0. From this example, we understand that for the comparison with 0 instructions, the value inside a register is being compared with 0. So, this instruction basically means branch if less than 0 R 2, which you could read as branch if R 2 is less than 0 2 minus 16. Now, in the lead up to this in the in the previous lecture, in talking about the different addressing modes used by the MIPS 1 instruction set, I had indicated that the branch instructions use a PC relative addressing mode. And in the case of a branch instruction, the first operand is very clearly being specified in the register direct addressing mode and therefore, the PC relative addressing mode must be referring to the second operand, this minus 16, which is why we are not too surprised to see in the meaning column that the meaning of the minus 16 is that the way that the program counter is changed. In other words, the way that the control flow of the program is modified is by taking the old value of the program counter and subtracting 16 from it. At this point, we are not too sure why this plus 4 is there. That will become clearer when we talk about how the hardware to implement this instruction set could be implemented. But for the moment, we must understand that this is the meaning of the instruction that if I execute an instruction branch of less than 0 R 2 minus 16, the meaning is that if the current value of the register R 2 is less than 0, then the program counter will be modified. In other words, there will be a change of control to the program counter of the branch instruction plus 4 minus 16 and the minus 16 is because of the fact that this is PC relative addressing. So, something similar is going to happen for all of the instructions which compare with 0, but what about the instructions that do not compare with 0? In other words, the branch if equal and the branch if not equal. Once again, if there is a condition which is being checked, and the check of equality is being done, then two things must be being checked for equality. So, the question is what is the BEQ instruction look like? And the answer is that in the case of the BEQ and the BNE instruction, there are actually two operands. In fact, an example would be BEQ R1 R2 minus 16 and this instruction basically reads branch minus 16 if R1 is equal to R2, a branch if equal R1 R2 minus 16 and similarly, for the branch if not equal. So, this family of instructions is adequate for many of the conditions that you will encounter, but if there are conditions which do not directly fall under this set of six comparisons, then And one must find a way of achieving those conditions using these six instructions, which is the programming challenge or the challenge to the compiler. So, the idea of the BEQ R 1 R 2 minus 16 is that the branch will be taken if the contents of R 1 are equal to the contents of R 2, if the contents of these two general purpose registers are equal. Now, in terms of terminology, you should note that take the branch is the same as saying that the program counter will be modified to something that is not just what it would have been if the branch had not been taken. 
And in terms of terminology, one often talks about the target address of a branch. The target address is the branch, the address to which control will be transferred if the branch is in fact taken as indicated by the PC relative addressing mode. So, from now on, I will refer to target address of a branch or even the target address of a jump, because even for a jump, there is an address to which control will be transferred and in general, we will refer to this as the target address. So, in this particular example, the branch of less than 0 example, the target address is PC plus 4 minus 16 and as I said, we do need to understand what the plus 4 is, that will happen a little bit later. Now, in general for the examples that we write, it will be somewhat inconvenient to write minus 16 and then to actually count down in the program or up in the program to see which is the instruction, which is at that particular address. So, rather than planning on doing that for writing examples in this course, I will rather go for the alternative of using a C like notation for the specification of the PC relative addressing mode and I will explicitly just label instructions of the program and instead of putting the PC relative address of the target into the instruction, I will just put the label corresponding to the target in the instruction. This will make it a lot easier for us to read and understand code sequences in the rest of this course. So, I am postponing only one thing in this discussion and that is to explain to you in more detail where this plus 4 is coming from. Otherwise, we have understood how these conditional branches operate. So, let us move on to the jump instructions. So, in general, J stands for jump and I had indicated that R stands for register. In general, jump is unconditional control transfer. So, we have two examples, jump to target and the target is specified in the instruction as a 26 bit absolute. Remember that in our notation slide, I had talked about target 26 as being a absolute, something which is specified absolute addressing mode. In other words, it is inside the instruction. Second example is an example of J R, I am sorry, it should be an R here, J R R 5. So, what do these instructions do? The second instruction very clearly changes the value of the program counter, so that program counter now contains whatever value is present in R 5, but the first example is little bit more complicated as can be seen from the meaning. So, how is the program counter manipulated in the case of the first example? Now, the problem that we are seeing over here is, as any jump instruction is an unconditional control transfer, in both these examples, the objective is to transfer control to the instruction at the target address and the target address is specified using the absolute addressing mode. The target address itself has to be included inside the jump instruction. Since the target address is of size 32 bits, I am sorry, since the target address is going to be the address of an instruction, it will conceivably have to be 32 bits in size, but obviously, I cannot include a 32 bit target address inside a 32 bit instruction. 
We saw this problem in the previous lecture and therefore, the MIPS 1 solution is that you can specify, you are allowed to specify a 26 bit target address and the hardware uses this 26 bit target address to construct a 32 bit address by adding two zeros as the least significant bits of the final target address and the remaining bits of the target address are just taken from the current value of the program counter. In other words, the value of the program counter associated with the jump instruction itself. So, this is the way that they overcome the problem of not being able to specify a 32-bit target address. And, it may be a little bit restrictive in the range of addresses that can be used as targets for the jump instruction, but if that is the case, then one can always use the jump register instruction, in which case a 32-bit target address can be placed into the register. So much for the jump instruction and the jump register instruction. With this, we will move on to the next category of MIPS 1 control transfer instructions, the third line in our table. These are the jump and link instructions of which once again there are two and now we can read this as jump and link and this as jump and link register. So, what do these instructions do? First of all, let me just give you an example of each jump and link register. in the case of the jump register instruction from the previous table would have a single operand, which is a register operand, whereas the jump and link instruction has a single operand, which is a specified in absolute addressing mode. The meaning example over here applies to the first example, not to the second example. Now, the meaning that we see over here is little bit interesting. This is the first instance that we are seeing of a MIPS 1 instruction in which the meaning of one instruction involved two operations. Until now, all of the situations that we had, the meaning was a single line or if there was one situation where the characters would not fit on to a single line, but here there are two separate operations which are happening as part of the jump and link instruction, which means that this is a very special kind of an instruction. does more than one simple thing. And if we look at this in more detail, we see that it does two things. One of them is to manipulate R 31. It puts PC plus 8 into R 31, even though R 31 is not an explicit operand of this instruction. And then, it modifies the program counter to now contain the value of R 2. This is somewhat curious sequence of instructions. So, it seems to unconditionally transfer control to the instruction at the target address, that is what the second step is doing, while at the same time remembering PC plus 8 in register R 31, which is what the first instruction is doing, which is what the first operation in the sequence was doing. So, this is the first example that we are seeing in the MIPS 1 risk instruction set of an instruction, which is doing something, which is not a really primitive operation, because in order to describe this, I had to use two primitive operations and therefore, this must be really important, otherwise I would not have included it. It is a deviation from the risk, in some sense, it is a deviation from the risk design principle of trying to have each instruction doing only one single operation. 
So, as we will see, I believe in the next lecture or possibly the lecture after that, This instruction is critical for the implementation of function calls and that is why it had to be included and how it is useful in the implementation of function calls, we will see shortly. Now, before actually moving to looking at programs in general, I wanted to just try to relate control transfer as you know it from your C programming experience to control transfer as we must view it now in the MIPS 1 world. Now, here in this table, I have three examples of control transfer from your C world. There is a go to statement, there is an if then else construct and there is a repeat loop. The repeat loop is one example of a loop. I could just as well have included a while loop or a for loop and after having gone through the repeat loop, you will see that you yourself will be able to fill up the table with additional entries for a while loop or for a for loop without any great difficulty. So, the purpose of this table is to give us a good understanding of what we can expect to see in a MIPS 1 program in place of a goto. In our original C program, if there was a goto, what can we expect to see in the MIPS 1 program? If in our original program, there was an if then else, what can we expect to see in the MIPS 1 program and so on? So, let us start by thinking about the goto. So, go to in a C program is an example of a unconditional control transfer. Control is to be transferred to the statement, which has that particular label unconditionally. No need to check any condition. So, we suspect that in the MIPS world, the same effect is going to be achieved using an unconditional control transfer instruction. Therefore, either a jump instruction or a jump register instruction. Since in our current terminology, I am using labels in my MIPS 1 programs, what we will actually see is that, goto label will map into a jump instruction to the same label. Now, there is a remote possibility that you will recall that under this notation, label will be specified using the target 26 absolute addressing mode inside the jump instruction. Therefore, as I had suggested in the slide on jump instructions, there is a possibility that if the target address of this jump, in other words, the label, the statement which is labeled by the goto is very far away from the jump instruction, then it may not be able to specify it appropriately using a 26 bit target address. And you can work out the kind of constraints, which this places on the distance between the goto statement in your C program and the label and try to figure out if it is really that much of a constraint after all. Remember that this is 26 bits, which is actually a fair distance in terms of instructions, even if each instruction is 4 bytes in size. So, the go to a label will simply map into a jump to a label in almost all cases that you will encounter. Next, let us consider the if x greater than 0, the if then else construct. Now, I am using a generic kind of a example here. In general, there will be some condition. I am using the example of the condition is x greater than 0. X is a variable. In the then part, there could be any number of statements. So, I am just indicating it by then part. In general, if there is more than one statement, you would have to enclose that by braces and then there is the else part, which could be once again any number of C statements. 
So, very clearly in the MIPS 1 equivalent, I will specific to this example, I will have to start by loading the variable x into a register, because in the MIPS 1 branch instructions, I clearly have to use a branch instruction, a conditional branch instruction to check this condition. The MIPS 1 branch instructions all take their operands out of registers. Therefore, I will have to start by loading the value of this variable x into a register. Then I will have to compare its value to 0 and that I can do using one of those six branch instructions and appropriately, the target of the branch will be a label, which will relate to the then part or the else part. So, the way I am suggesting to work it out is as follows. I will start by getting the value of the variable x, which is related to this condition into a register, let us say R 1. Then, in this particular case, I want to transfer control to the then part, if x is greater than 0. I have a way to check if x is greater than 0 in the MIPS 1 instruction set using branch is greater than 0. So, if r 1 is greater than 0, then I transfer control to then part and I have a label called then part later in my program. What if r 1 is not greater than 0, then I want to transfer control to else part. Therefore, in this example, I have put my else part as a label immediately following the conditional branch instruction and therefore, this sequence will achieve what was required by the if then else. Now, we need not be too concerned that the then part appear before the else part in the C program and in the MIPS program, the else part appears before the then part. That is just inconvenience if one is, that is not even an inconvenience because ultimately, one does not really read, typical C programmer does not even read the machine code, machine language code. Now, finally, we will go to the repeat loop and I am again using a somewhat generic example. repeat any number of statements, which are inside the repeat loop, until this condition becomes true. In other words, until x becomes not equal to 0. Once again, x is the name of a location in memory. It is the name of a variable. Therefore, I will have to start by loading x into a register. Subsequently, I can use a conditional transfer instruction, which will transfer control to the beginning of the loop body. Therefore, the way that this could be mapped is, I have an instruction to load x into a register, then I have the loop body and I have associated with the loop body a label called loop head. So, that at the end of the loop body, I check if R 1 is equal to 0, R 1 contains the variable x. Therefore, if R 1 is equal to 0, that means that this condition is not true and therefore, I have to go back to loop head and therefore, this achieves the effect of this loop. So, just note that even though the condition was not equal to 0, I ended up having to use a condition of equal to 0, because the not equal to 0 was the exit condition for the loop and therefore, the continue condition for the loop was equal to 0. here once again that I am comparing R 1 with 0 by using register R 0. I had to do this because I am using the BEQ instruction which has two operands. Now, just to make sure that notation here is clear, inside my C statement, there was a loop body which contained any number of C statements. Now, those C statements would get translated by the compiler into some collection of machine MIPS 1 instructions. 
The number of MIPS 1 instructions in the loop body will depend on the number of statements in the C loop body and on how the compilation is done. But with this, we understand that the important C control transfer constructs we are aware of can all be achieved using the conditional and unconditional transfer instructions of the MIPS 1 instruction set. So, with this we have seen the data transfer instructions, the arithmetic and logical instructions and the control transfer instructions, and we are ready to look at real examples of MIPS 1 programs. Now, as it happens, if I had made a MIPS 1 instruction set architecture manual available to you, you would have read about the details of each of these instructions separately. Each instruction would have been described on a separate page, and that page would have given you all the information I have gone through here, plus some additional information. Some of that additional information would have been of real relevance, and therefore, as an important aside, I am going to mention two kinds of comments that one might see in an instruction set architecture manual relating to instructions we have come across. So, I have labelled this slide interesting notes from the instruction set architecture manual. Now, for some RISC instruction sets, you find comments like this associated with the load instructions in the instruction set architecture manual: the loaded value might not be available in the destination register for use by the instruction immediately following the load. This is obviously a comment included on the page of the load instruction as a warning to the programmer, because by default the programmer might assume that for a load instruction, the value being loaded into the destination register becomes available as soon as the instruction has been executed, and that therefore the instruction immediately following the load can definitely use the value in the destination register. This is an explicit warning that that is not the case, and it is apparently present in the instruction set architecture manuals of many MIPS 1 processors, which is why I have included it here. In fact, this warning is so important that it is given a name, the load delay slot, indicating that the programmer must be aware that load instructions have a delay associated with them: they do not cause the destination register to be updated as soon as you might imagine. There is also an element of uncertainty. If you look at the description, it says that the loaded value might not be available, suggesting that in some implementations of the MIPS 1 instruction set, or for some programs when they execute, the loaded value might in fact be available; but in general the programmer cannot assume that the loaded value will be available. That is one kind of warning you might see. Another kind of warning you might see in the instruction set architecture manual of a RISC-like architecture such as the MIPS 1 relates to control transfer instructions, and it might say something like this: the transfer of control takes place only following the instruction immediately after the control transfer instruction.
Now, typically you would expect that the transfer of control takes place as part of executing the control transfer instruction itself, so that when a branch is taken the instruction following it in the program is simply not executed; but here we have explicit mention that the transfer of control takes place only after the instruction immediately after the control transfer instruction has been executed. This is therefore a very important warning to a programmer, particularly to one who is programming in this language, the machine language. And once again, this warning is so important that on the relevant page of the instruction set architecture manual it is given a name, the branch delay slot: a warning that control transfers are also delayed, that they do not happen as quickly as you would have thought. So, from the programmer's perspective, what does this mean? Let us try to understand what I, as a programmer, should do if I see these kinds of warnings in an instruction set architecture manual. Let us start with the warning about load instructions. The warning had to do with when the loaded value becomes available in the destination register. So, suppose I have written a program containing a load word instruction; we saw this instruction not too long back. A load instruction in general causes a value to be copied from a variable in main memory into a register, and this particular one loads a word-sized, 32-bit quantity. The destination is register R1, and the source is specified in base displacement addressing mode: the address of the memory operand is calculated by taking the contents of register R2 and adding to it the immediate constant, which in this example is minus 8. This computation gives the address of the operand, which can then be sent to memory so that the data can be fetched. Now, if I have written a program using such an instruction, it is typically because I want to use the value of that variable out of register R1, so it is not at all uncommon for the next instruction in my program to be one that uses R1. The next instruction in this particular example is an add instruction, which adds the contents of R1 to the contents of R2 and puts the result into register R3; remember that in our notation the destination register is written first, followed by the two source registers. So, the add instruction shown here is an instruction that uses the value of R1, and it is quite possible that my intent in writing these two instructions one after the other was that I wanted a value to be loaded from memory, let us suppose that minus 8 off R2 corresponds to my variable x, and that I then wanted to use that value in the addition. That might have been why I wrote the load followed by the add. Then I must take into account the warning, which tells me that for a load instruction the loaded value might not be available in the destination register for use by the instruction immediately following the load. Therefore, the situation we have here is clearly dangerous.
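Written out in the lecture's notation, the pair of instructions being discussed is simply the following; the choice of the variable x living at offset minus 8 from R2 is just the running example.
        lw    R1, -8(R2)       # load the word at address R2 minus 8, our variable x, into R1
        add   R3, R1, R2       # uses R1 in the very next instruction; with a load delay slot, R1 may not yet hold x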
There is no guarantee that the add instruction will get the value that was loaded. Therefore, I as a programmer, or as a compiler writer, must adjust for this warning. The way I can do that is by making sure that the load instruction and the instruction that uses the loaded value are not consecutive but separated. So, for example, if I take a modified version of the same code segment, with the same load instruction and the same add instruction, but put between the two an additional instruction that does not itself use register R1, then even under the warning provided in the instruction set architecture manual, the add instruction will safely receive the value loaded into R1 by the load instruction, and the program will run as I had expected, as I had originally planned. This is the kind of warning which one must very clearly take into account as a machine language programmer or as a compiler writer. In this particular example, the instruction set architecture manual warned me that there was one load delay slot, meaning that the load instruction and the instruction using the loaded value have to be separated by at least one instruction. But there could have been a more serious warning, for example that there are two load delay slots, in which case I would have had to write the program in such a form that there were at least two instructions between the load instruction and the next instruction using the loaded value; and the warning would have been worded in a form that we can now understand in this light. Now, moving on to the next warning, the warning about control transfer instructions. This relates to a program that uses a control transfer instruction, so I will use the example we have already seen, the example of the repeat until. Remember that I implemented the repeat until using a label at the beginning of the loop body, followed by a conditional branch instruction, the BEQ, whose condition involves register R1. Now, let us remember what the warning was: for a control transfer instruction, the transfer of control takes place only following the instruction immediately after the control transfer instruction. How does that relate to the code segment we have here? According to that warning, the transfer of control to head, which is the control transfer I am interested in for a correct implementation of the repeat until, will not happen immediately after the branch instruction, but only after the next instruction in the program has been executed. In other words, given the warning we just read from the MIPS 1 instruction set architecture manual, the diagram I should have in mind is something like this: if I had written the program assuming that control is transferred to head immediately after the branch if equal instruction, as suggested by the dotted arrow, then I would have misunderstood what was happening. In fact, the statement immediately following the branch, whatever it is, is also part of the repeat loop; it will be executed along with the loop body, regardless of what else may be present in the program.
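Sketched in the same notation, with a hypothetical stand-in instruction for the loop body and for whatever happens to follow the branch, the situation being described looks like this.
head:
        add   R3, R3, R1       # stands in for the loop body
        beq   R1, R0, head     # intended control transfer back to head while x, held in R1, is 0
        sub   R4, R4, R1       # hypothetical next instruction: under the warning, this executes before control actually reaches head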
Therefore, thanks to our warning from the instruction set architecture manual, I must adjust my repeat until implementation, possibly by taking the last instruction of the loop body, if it is safe to move it, and putting it after the branch if equal instruction. This is a very important warning, so important in fact that it is referred to as the branch delay slot even in the MIPS manual. And once again, it is conceivable that for other processors you may see a warning of two branch delay slots, which essentially tells you that two instructions after the control transfer instruction will be executed before the control transfer takes place. These warnings are clearly very important and we will take both of them into account; in fact, for most of the examples I will write the code under the assumption that there is one branch delay slot and one load delay slot, so that this idea of having to be aware of warnings of this kind will sink in. So, the correct implementation would have head, then the loop body, then the branch instruction and then the next instruction, with the realization that the control transfer happens only after that next instruction. Now that we have seen all the MIPS instructions, we can move to the last part of the instruction set architecture manual, which is information about what each instruction looks like. Remember that the instructions of your program, as generated by the compiler, end up in main memory; your program executes out of main memory, and if the instructions are to be in main memory, they are going to be represented in binary. Up to now I have been describing instructions such as branch equal using a very readable notation, but in reality each instruction is going to be some binary sequence, a 32-bit binary sequence. What does that 32-bit binary sequence look like? That is what we learn from the portion of the instruction set architecture manual that describes the encoding of instructions. Now, the MIPS 1 instruction set has three different formats for instructions. The first instruction format is what is called the R format, and the diagram we have here describes what a 32-bit instruction in the R format looks like: in other words, what the different bits, from the least significant to the most significant, are used for. You notice that there are several different fields in this instruction. There is an opcode field, which we suspect must be some encoding of the operation, since opcode presumably stands for operation code. There is a field labelled SRC1, which specifies the first source operand, and there are fields for the second source operand, the destination operand and various other purposes. Now, if we look at the size of each of these fields, we notice that each of the operand fields is a 5-bit field. What does it mean to have a 5-bit field? A 5-bit binary value can take on values between 0 and 31, and this is consistent with our understanding that in MIPS 1 instructions, a register direct operand has to specify the identity of a register: any register between R0 and R31 can be specified using the appropriate 5-bit sequence. So, the foremost question in your mind at this point will be: which instructions are encoded using the R format? Very clearly, any of the arithmetic or logical instructions that does not use an immediate operand can be encoded using this format.
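Laid out bit by bit, the R format just described looks like this; the exact widths shown are the standard MIPS 1 field widths and are stated here as an assumption, since the lecture spells out only the 5-bit register fields and the total of 12 bits shared by the opcode and function code.
        # R format, 32 bits in total:
        #   opcode (6 bits) | SRC1 (5) | SRC2 (5) | dest (5) | shift amount (5) | function code (6)
        #   each 5-bit register field simply holds the register number, so R2, for example, is encoded as 00010
The worked example that follows fills these fields in for a particular instruction.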
For example, consider add R1, R2, R3. How will the fact that this is an add operation be encoded? It would be encoded using the opcode bits, and possibly also the last 6 bits, which are labelled as the function code; so there are 12 bits available for encoding the fact that this is an add instruction. The fact that the destination operand is R1 and that the source operands are R2 and R3 would be encoded using the SRC1 field, the SRC2 field and the destination field. So, what exactly would one see in the SRC1 field for this particular example? It is a 5-bit field, and what I want to see there is an indication that R2 is the first source operand; therefore I would see the value 2 in that 5-bit field, which looks like 0 0 0 1 0. Similarly, for R1 I would expect to see 0 0 0 0 1 in the destination field, and for R3 I would expect to see 0 0 0 1 1. So, once all the bit fields are filled in, we know exactly what the add R1, R2, R3 instruction looks like in 32 bits. You can find the exact values of the opcode and function code bits for the add instruction by looking up the appropriate section of the MIPS 1 instruction set architecture manual, if you need to. So, basically, the arithmetic and logical instructions that use only register destination and source operands can be encoded using the R format. The second MIPS 1 instruction format is the I format, or immediate format, because it has a 16-bit constant field inside the instruction; in addition, there is a 6-bit opcode field, a 5-bit source operand field and a 5-bit destination operand field. As you would imagine, any instruction of the MIPS 1 instruction set that has a 16-bit displacement or immediate value can be, and in fact will be, encoded using this format, and there are several examples. Take the arithmetic and logical instructions that have immediate operands: the immediate operand requires a 16-bit field. In this particular case the constant is small and would need only a few bits, but in general the immediate operand is given the full 16-bit value. So the operation would be encoded in the opcode field, the destination register in the destination field, the first operand, which is a source operand in register direct addressing mode, in the SRC1 field, and the 16-bit immediate operand in the constant field. In this particular example, I would see the value 8 in two's complement in the 16-bit field: in other words, a lot of 0s followed by 1 0 0 0, which is the 16-bit two's complement representation of 8. What other instructions would be encoded using the I format? Think about the load instructions or the store instructions. All of these operations have their memory operand specified in base displacement addressing mode: they have to specify a base register, a destination and a displacement, and the displacement is a 16-bit signed displacement. So, once again, the I format can be used for this purpose. The fact that it is a load word could be encoded in the opcode field.
The displacement can be encoded in the constant field, the base register in the SRC1 field and the destination register in the destination field. Anything else? Yes, even the conditional branch instructions can be encoded using this format. Recall that for the conditional branches, the target is specified as a PC-relative displacement, and the size of that displacement is 16 bits; so once again the 16-bit field can be used for this purpose. The opcode would indicate that this is, say, a branch if less than zero instruction, the register R1 would be encoded in the SRC1 field, and the 16-bit PC-relative displacement in the constant field. So, many different instructions actually find their encoding in the I format. There is one other format in the MIPS 1 instruction set, and that is the J format. The J format has a large 26-bit field, and as you would suspect, this is necessary for the instructions that use the 26-bit absolute (target-26) addressing mode, of which we know there are a few: the jump instruction and the jump and link instruction. So, for example, if there was a jump and link instruction in your program, in our notation I would write the target as a label, but in the encoded instruction the label would be represented by its actual 26-bit address, using the mechanism we saw in the slide about control transfer instructions. With this, it turns out that only these three formats are necessary: all the instructions we have seen, and other instructions which we have not seen, like the system call instruction, can all be encoded using these three formats. We are now in a position to move forward and start looking at programs, and in order to do this I just want to remind you a little about what happens when you write a C program and how it moves towards being a program in the machine language. So let me quickly recall something from one of our earliest lectures: when you write a program in C, you have to compile it, in a step called compilation, and the net result is that your C program file, the file containing the program you typed in using a text editor or something like that, is translated into an equivalent program; the default name of the output file of GCC is a.out. So the output of GCC is a file called a.out, containing an executable, equivalent program in machine language. In other words, if the machine you are working with is a MIPS 1 machine, this would be an equivalent program in the MIPS 1 machine language. Now, we saw that this translation happens through a series of steps: your program, program.c, goes through CPP, then it goes through CC1, and along the way a temporary file called program.s is created and used by one of the later steps; another temporary file called program.o is created, which is merged with other files to generate a.out. We saw this in an earlier lecture. What I would like to point out in this lecture is that these different files, program.c, program.s, program.o, the library files and a.out, are all potentially available for you to look at, as I had mentioned earlier. Now, some of these files are going to be easy to look at.
For example, you can always look at program.c, and you may be able to find a mechanism by which you can look at program.s. Both program.c and program.s are text files, so you can read or modify them using whatever text editor you are used to, the same mechanism you used to create the original program.c. On the other hand, the other files I have mentioned on this slide, program.o, a.out and the library files, are not text files but object files; in other words, they are binary files, containing information that you cannot easily edit using a text editor. That does not mean you cannot write programs to open those files and read or write them, but you would have to go through the effort of actually writing a program to open and read an a.out file or a program.o file, or you would have to use something else, like an octal dump program, to do the same thing. So, some of the files you encounter in the process of compilation are friendly and easy to read. You could therefore take a program, find a way by which GCC would give you program.s, read and even edit that program.s file, and then pass it through the remaining steps of compilation. So, with this thought in mind, we will move forward in the next lecture towards actually seeing what happens to a C program as it ends up in an a.out file containing MIPS 1 instructions. Thank you." } }