Saturday, January 25, 2020
Polymorphism In Object Oriented Design Information Technology Essay
In large-scale organizations, measuring software quality during the development of a software product is complex. High-quality software enhances the potential for reuse and reduces maintenance cost. Many of the presently available software metrics apply only once a particular software product exists. Polymorphism is regarded as a technique that improves reusability, and it can be measured through POF, the Polymorphism Factor derived from the MOOD (Metrics for Object Oriented Design) metrics.

II BACKGROUND

Software metrics are key ways to measure any kind of software product, from the starting phase to the obsolete phase. These metrics are used throughout a project to assess software quality, estimate cost, and gauge the time needed to develop the software. Traditional metrics are generally held not to apply to object-oriented design, since object-oriented software metrics operate at the class level and address characteristics such as abstraction, inheritance, modularity and polymorphism. These metrics tell the software engineer how modifications can be made to reduce cost and development time while improving the quality, continuing capability and profitability of the software. In [7] nearly 100 metrics had already been given for assessing the complexity of software code, and in [8] more than 150 metrics were proposed in the field of the object-oriented paradigm.

Object-oriented software metrics are differentiated into two types, static and dynamic; both static and dynamic code analysis are performed during source-code reviews. Static metrics are derived from measurements taken by static analysis of the software code, which is performed without executing any of it.
Static analysis is well suited to understanding security problems within program code and can identify a large share, by some estimates nearly 85%, of the flaws in it. Dynamic metrics are derived from measurements taken by dynamic analysis of the software code, which studies the code's behavior during execution. Earlier work focused mainly on static metrics, but more attention has since been given to dynamic metrics, as their results are derived at run time. It is this run-time binding, in which one form or object can be substituted for another, that relates dynamic analysis to the polymorphism factor.

III FORMS OF POLYMORPHISM

Polymorphism is considered one of the salient features of object-oriented programming languages, and it deals mainly with reuse. In object-oriented analysis, polymorphism is rooted in message passing, substitutability and inheritance, which yield the is-a relationship. This similarity enables a variety of techniques such as code sharing and reuse. Polymorphism in the object-oriented paradigm is commonly differentiated into five types: generics, pure polymorphism, overloading, deferred methods and overriding; generic classes and methods together are known as generics. This paper focuses mainly on three of these: overloading, overriding and pure polymorphism.

Pure polymorphism is achieved by applying the same function to arguments of different types. Method overriding occurs when the implementation provided in the superclass is modified in a subclass. Method overloading occurs when different methods share the same name but differ in signature; overloading is also known as ad hoc polymorphism. In C++, the distinction among polymorphic behaviors maps onto run-time binding decisions (overriding methods, i.e. virtual functions) and compile-time binding decisions (overloaded functions). From this design, polymorphic functions can be categorized into different types.
A polymorphic member function can be characterized by which of its properties (name, signature, binding) a new declaration changes while the others remain the same. This generates the different kinds of polymorphic implementation that affect quality in the object-oriented paradigm.

A) Pure Polymorphism

This behavior is also known as parametric overloading: within a single class scope, the same name is declared with different signatures. It is identified by a class implementing several functions with the same name but distinct signatures.

B) Static Polymorphism

Here, methods with the same name but different signatures appear in classes that are related by inheritance; this is the non-virtual overriding (hiding) behavior. In C++, overriding methods fall into two forms, virtual and non-virtual. Non-virtual overriding functions are identified by the differing signatures of their declarations; since they are resolved by static binding, this form is called static polymorphism.

C) Dynamic Polymorphism

This behavior uses the same name with the same signature in an overriding (virtual) function. The binding decision is deferred until execution, so it is resolved at run time. Specialization and combination are the two features of object-oriented design that give rise to static and dynamic polymorphism.

The polymorphism forms discussed above each take a single perspective; the patterns can be identified and combined across class inheritance relationships.

Fig 1. Simple Inheritance.

Simple inheritance states that one parent can have many children, but each child can have only one parent. Fig 1 illustrates this: A is a parent, and B and C are children of parent A.
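The contrast between compile-time and run-time binding described above can be shown in a few lines of Java (the class and method names here are illustrative, not taken from the paper):

```java
// Overloading vs overriding: the compiler picks among overloads from the
// argument types; the JVM picks among overrides from the object's class.
class Shape {
    // Overloading: same name, different signatures, resolved at compile time.
    static String area(int side) { return "square:" + (side * side); }
    static String area(int w, int h) { return "rect:" + (w * h); }

    // Overriding: subclasses supply the implementation, chosen at run time.
    String describe() { return "shape"; }
}

class Circle extends Shape {
    @Override String describe() { return "circle"; }
}

class PolymorphismDemo {
    public static void main(String[] args) {
        Shape s = new Circle();              // static type Shape, dynamic type Circle
        System.out.println(Shape.area(3));   // square:9  (compile-time dispatch)
        System.out.println(s.describe());    // circle    (run-time dispatch)
    }
}
```

Here `area(3)` is resolved by the compiler from the argument types, while `s.describe()` is resolved at run time from the object's actual class.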
Likewise, P and Q are children of parent B, and M and N are children of parent C. Table 1 shows the derived set of metrics, taken as the combinations of dynamic and static polymorphism forms with respect to the simple inheritance relationship [2].

IV POLYMORPHISM FORMS METRICS

Table 1
  OVO  Overloading in Stand-alone classes
  SPA  Static Polymorphism in Ancestors
  SPD  Static Polymorphism in Descendants
  DPA  Dynamic Polymorphism in Ancestors
  DPD  Dynamic Polymorphism in Descendants

In the object-oriented paradigm, generic methods and classes reduce the amount of description needed for newly created objects and classes. The OVO (Overloading in Stand-alone classes) metric measures the intensity of method genericity within a class scope by counting the member functions that share the same name. The static and dynamic metrics measure static and dynamic binding separately.

Example 1

  class P {
      void a(int k);
      void a(float l);
      void a(int i, int l);
      void b();
      void b(int n);
  };

  class Q : public P {
      void a();
  };

  class R : public Q {
      void a();
      void b();
  };

  class S {
      void a();
  };

A) Data Validation

To validate the polymorphism metrics, the product-metric validation methodology used for other suites of object-oriented design metrics is applied. The validation assesses the capability of the polymorphism metrics to predict fault-prone functions. The data are gathered from an open multi-agent systems development environment known as LALO [2]. LALO has been under development since 1993 at CRIM (Computer Research Institute of Montreal) and comprises approximately 40k source lines of code and 85 C++ classes. The analysis draws on the source lines of code, information about the classes, and fault-related data.
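As a rough illustration of the OVO idea, the overload counts of a class like P in Example 1 can be tallied with Java reflection. This is one plausible counting scheme (extra declarations beyond the first per name), not necessarily the paper's exact definition:

```java
import java.lang.reflect.Method;
import java.util.HashMap;
import java.util.Map;

// Class P mirrors Example 1: a() declared three times, b() twice.
class P {
    void a(int k) {}
    void a(float l) {}
    void a(int i, int l) {}
    void b() {}
    void b(int n) {}
}

class OvoSketch {
    // Count overloaded declarations beyond the first, summed over names.
    static int ovo(Class<?> c) {
        Map<String, Integer> byName = new HashMap<>();
        for (Method m : c.getDeclaredMethods())
            byName.merge(m.getName(), 1, Integer::sum);
        int extra = 0;
        for (int n : byName.values()) extra += n - 1;
        return extra;
    }

    public static void main(String[] args) {
        System.out.println(ovo(P.class)); // 3: two extra a(), one extra b()
    }
}
```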
B) Descriptive Statistics

Table 2 shows the descriptive statistics for the polymorphism metrics. Because the LALO classes have very low inheritance depth, weak variance is observed for the descendant-side polymorphism metrics, SPD and DPD. This confirms that the weak distribution of polymorphic forms is due to the low usage of inheritance in the LALO classes.

Table 2
  Metric   Max     Min    Mean   Median   StdDev
  OVO      15.00   0.00   3.47   3         2.71
  SPA      18.00   0.00   3.54   1         4.63
  SPD     111.00   0.00   3.73   0        13.87
  DPA       5.00   0.00   0.73   0         1.29
  DPD      28.00   0.00   0.77   0         3.35
  SP      111.00   0.00   7.28   3.5      13.90
  DP       28.00   0.00   1.50   0         3.49

C) Polymorphism Metrics and MOOD Metrics

Table 3 shows the correlation between the five polymorphism metrics and the POF (Polymorphism Factor) measure from the MOOD (Metrics for Object Oriented Design) set [6]. POF is most strongly correlated with the SPA and DPA metrics, as they capture the same form of polymorphism, namely overriding. OVO, SPD and DPD have very poor correlation with POF.

Table 3
        OVO   SP    DP    SPA   SPD   DPA   DPD
  POF   .06   .50   .42   .98   .02   .72   .00

D) Correlation Between Polymorphism Metrics

SPA and DPA count the member functions that are overridden between a class and its ancestors. This suggests that dynamic and static polymorphism have a similar quality impact in ancestor relationships. Since DP and SP are computed from SPA, SPD, DPA and DPD, where DPA and SPA are closely correlated while DPD and SPD are barely represented in the dataset, the expectation is that SP would be most strongly correlated with DP.

Table 4
         SPA   SPD   DPA   DPD   SP    DP
  OVO    .06   .05   .04   .07   0     0
  SPA    1.0   .03   .71   .00   .51   .42
  SPD          1.0   .01   .67   .27   .13
  DPA                1.0   .00   .43   .63
  DPD                      1.0   .22   .29
  SP                             1.0   .70

F) Relation Between Polymorphism Metrics and Chidamber and Kemerer Metrics
Here the polymorphism metrics are compared with the Chidamber and Kemerer (CK) metrics [9]. The six CK metrics are:

  WMC   Weighted Methods per Class
  DIT   Depth of Inheritance Tree
  NOC   Number of Children
  CBO   Coupling Between Object classes
  RFC   Response For a Class
  LCOM  Lack of Cohesion in Methods

a) DIT

DIT measures the maximum depth of the inheritance tree from a class to the root class. High DIT has been found to lead to more faults; however, the greater the tree depth, the greater the reuse, because of method inheritance.

Fig 2  Fig 3  Fig 4

Fig 2 shows simple inheritance, where each parent can have more than one child but each child has only one parent. In Fig 3, P and Q are jointly inherited from parent A, and Fig 4 shows a high DIT, as it illustrates a deep inheritance tree.

b) NOC

NOC counts the immediate subclasses of a class. Where a high DIT indicates a deep tree, NOC measures the breadth of the class hierarchy. A high NOC means many child classes and thus high reuse of the base class; high NOC has been found to lead to fewer faults because of this reuse, which is desirable.

The comparison shows that only two CK metrics exhibit a notable relation with the polymorphism metrics [2]. The most highly related pair is NOC-SPD, followed by DIT-SPA; both relations are expected. NOC counts the children of a class, while SPD measures the level of coupling, due to static polymorphism, with its descendants: the greater the number of children, the higher the chance of coupling through static polymorphism. DIT measures the depth of the inheritance tree of a class, while SPA measures the level of coupling, through static polymorphism, between a class (for instance class R in Example 1) and its ancestors.
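DIT and NOC are simple to compute once the inheritance tree is known. A sketch over the parent links of Fig 1 (the names A, B, C, P, Q, M, N as above):

```java
import java.util.HashMap;
import java.util.Map;

// DIT and NOC over the simple inheritance tree of Fig 1:
// A is the root; B and C are its children; P, Q under B; M, N under C.
class TreeMetrics {
    static final Map<String, String> parent = new HashMap<>();
    static {
        parent.put("B", "A"); parent.put("C", "A");
        parent.put("P", "B"); parent.put("Q", "B");
        parent.put("M", "C"); parent.put("N", "C");
    }

    // DIT: number of edges from the class up to the root.
    static int dit(String cls) {
        int depth = 0;
        for (String c = cls; parent.containsKey(c); c = parent.get(c)) depth++;
        return depth;
    }

    // NOC: number of immediate subclasses.
    static int noc(String cls) {
        int n = 0;
        for (String p : parent.values()) if (p.equals(cls)) n++;
        return n;
    }

    public static void main(String[] args) {
        System.out.println(dit("P")); // 2: P -> B -> A
        System.out.println(noc("A")); // 2: B and C
    }
}
```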
When measuring the different forms of class dependency captured by polymorphism, CBO shows a very low correlation with the polymorphism metrics. This confirms that the polymorphism metrics capture varieties of class coupling that are not well captured by the CBO metric [2]. A backward and forward logistic regression was performed to examine the relationship between the CK metrics and the polymorphism metrics [2]. By combining the two sets of object-oriented metrics, a significantly better methodology for identifying fault-prone classes was obtained.

V REUSE AND REUSABILITY METRICS IN POLYMORPHISM

As discussed earlier, reusability is the key property that polymorphism serves, and the following metrics address it directly [1]. The benefit of developing and upgrading software from existing software has made engineers focus on systematic reuse, from which organizations can derive extensive advantage.

A) Reuse and Reusability Metrics

Reuse metrics have grown out of a number of efforts in research on the economics of reuse. If the return on investment is positive, reuse can be judged beneficial. The costs involved include maintaining reuse libraries, modifying and maintaining reusable assets, and searching for, evaluating, identifying, integrating and selecting the potential artifacts. Several economic metrics implementing a cost-benefit ratio have been introduced. Gaffney and Durek proposed a reuse metric based on economic factors that is useful for finding the break-even point and the cost of reuse. A COCOMO-based reuse model, proposed by Gustafson and Balda, calculates the total time taken to implement software that incorporates reuse [1].
Gaffney and Durek's reuse economics metric is

  C = (b + E/n - 1) R + 1

where
  C = cost of software development relative to all-new code (beneficial reuse gives C less than 1),
  R = proportion of reused code in the project,
  E = relative cost of making code reusable, i.e. the cost, relative to new code, of developing a component for reuse,
  b = relative cost of reusing existing code in the project (searching for, evaluating and adapting it),
  n = the number of reuses expected.

Our ultimate interest is in measuring cost, but this metric needs the cost of creating the reused artifacts as an input.

B) Reuse Level Metric

The reuse level metric, introduced by Frakes, applies threshold levels to identify and remove items that are not reused often. For example, if the threshold level is 4, an object must be called at least 5 times to count as reused. The metric distinguishes internal from external reuse:

  Total reuse level = Internal reuse level + External reuse level
  Internal reuse level = IU/T,  External reuse level = EU/T

where T is the total number of internal and external items in the system, IU is the number of internal items reused, and EU is the number of external items reused. Each item contributes either 0 or 1 to IU or EU: if the item is used more times than the threshold, its value is 1, otherwise 0. The measure does not consider how many times an item is reused beyond the threshold. It counts items rather than SLOC, although items vary in size, some large and some small compared to others; a later version of the metric therefore assigns each item a weight depending on its size. This is done so that projects can be compared at the same threshold levels.

C) Measuring Polymorphism

The problem with measuring polymorphism is establishing exactly what happens within an application, since the behavior may vary from execution to execution.
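Both measures above are straightforward to compute. A small Java sketch with illustrative input values (the numbers are made up for demonstration, not taken from the paper):

```java
// Sketch of the two reuse measures: the Gaffney-Durek cost model and
// Frakes' threshold-based reuse level.
class ReuseMetrics {
    // Relative development cost: C = (b + E/n - 1) * R + 1.
    static double cost(double b, double e, double n, double r) {
        return (b + e / n - 1.0) * r + 1.0;
    }

    // Reuse level: fraction of items whose use count exceeds the threshold.
    static double reuseLevel(int[] useCounts, int threshold) {
        int reused = 0;
        for (int c : useCounts) if (c > threshold) reused++;
        return (double) reused / useCounts.length;
    }

    public static void main(String[] args) {
        // Reusing is cheap (b = 0.2), making code reusable costs 1.5x,
        // 5 reuses expected, half the code reused: C below 1 means
        // cheaper than writing everything new.
        System.out.println(cost(0.2, 1.5, 5.0, 0.5)); // 0.75
        System.out.println(reuseLevel(new int[]{5, 1, 6, 0}, 4)); // 0.5
    }
}
```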
Example 2

  List l;
  if (condition) {
      l = new ArrayList();
  } else {
      l = new LinkedList();
  }
  l.add("x");

In Example 2, whether the LinkedList or the ArrayList implementation is used depends on an external condition. There is no way to predict statically whether the condition will hold, so the polymorphic behavior cannot be determined from the source alone. Dynamic analysis is therefore the right way to measure polymorphism, as it gathers the information at run time. The following metric defines the amount of polymorphism that has occurred [1]:

  PBI = UPD / TD

where
  PBI = Polymorphic Behavior Index,
  UPD = unique polymorphic dispatches,
  NP  = unique non-polymorphic dispatches,
  TD  = total unique dispatches = UPD + NP.

In the statement List l = new ArrayList(), the variable l is declared with the List interface, which is called the declared interface. If the condition is true, ArrayList is the dispatched class in this example. The declared interface and the dispatched class are related by conforms-to and implements relationships. A class need not implement an interface directly; the implementation can be reached through a parent class. The method actually executed is the deepest inherited implementation of the method.

Example 3

  class A {
      void method1() {}
  }
  class B extends A {
  }

  B b = new B();
  b.method1();

The call b.method1() is dispatched to A.method1(), because that is the deepest inherited implementation of the method. A polymorphic dispatch is observed when the dispatched class and the declared interface differ; a non-polymorphic dispatch is observed when the dispatched class is the same as the declared interface.

Example 4

  interface P {
      void a();
  }
  class Sample implements P {
      public void a() {}
      public void b() {}
  }
  class Sample2 extends Sample {
      public void b() { super.b(); }
  }

Given

  Sample sp = new Sample();
  sp.a();

the variable sp refers to a Sample object. The Sample class contains the implementation of a(), so Sample.a() is dispatched.
The dispatched class, Sample, is the same as the declared interface, so this is considered a non-polymorphic dispatch.

  Sample2 sp2 = new Sample2();
  sp2.a();

Since sp2 refers to an object of type Sample2, the dispatched method is Sample.a(): although Sample2 does not provide its own implementation of a(), Sample.a() is the deepest inherited implementation, so Sample is the dispatched class. This is a polymorphic dispatch, because the dispatched class is Sample while the declared interface is Sample2.

  P sp3 = new Sample();
  sp3.a();

Here Sample.a() is dispatched, so the dispatched class is Sample and the declared interface is P. As the two differ, this is a polymorphic dispatch.

D) Inherited Method Call Using an Inherited Class

  Sample sp4 = new Sample2();
  sp4.a();

The dispatched method is Sample.a(), while the declared interface of sp4 is Sample. Since the declared interface and the dispatched class are both Sample, this is considered a non-polymorphic dispatch.

  Sample sp5 = new Sample2();
  sp5.b();

Here sp5 invokes the method b() overridden in Sample2: Sample2.b() overrides Sample's b() and substitutes its functionality. The dispatched method is Sample2.b(), as it is the deepest implementation, so this is a polymorphic dispatch: the dispatched class is Sample2 and the declared interface is Sample, and the two differ.

VI INTERNAL AND EXTERNAL REUSE

Software reuse is often differentiated into external and internal reuse. Internal reuse consists of calls made to code written previously for the given application; external reuse consists of calls to code that comes to the project from an external source.
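Given a trace of unique (declared interface, dispatched class) pairs such as the ones worked through above, PBI is a simple ratio. A hedged Java sketch (the trace here is hand-built for illustration; a real tool would collect it at run time):

```java
import java.util.Arrays;
import java.util.List;

// Computes PBI = UPD / (UPD + NP) over observed dispatch pairs.
// A pair is polymorphic when declared interface and dispatched class differ.
class PbiSketch {
    // dispatches: unique (declaredInterface, dispatchedClass) pairs.
    static double pbi(List<String[]> dispatches) {
        int upd = 0; // unique polymorphic dispatches
        for (String[] d : dispatches) if (!d[0].equals(d[1])) upd++;
        return (double) upd / dispatches.size();
    }

    public static void main(String[] args) {
        List<String[]> seen = Arrays.asList(
            new String[]{"Sample", "Sample"},   // non-polymorphic (sp.a())
            new String[]{"Sample2", "Sample"},  // polymorphic (sp2.a())
            new String[]{"P", "Sample"},        // polymorphic (sp3.a())
            new String[]{"Sample", "Sample2"}); // polymorphic (sp5.b())
        System.out.println(pbi(seen)); // 0.75
    }
}
```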
Comparing the two, external reuse is generally more beneficial than internal, but both are used; for instance, a developer may reuse an existing method rather than develop a new one that performs the same action. Internal methods are methods created by the application developers themselves, including custom code that extends an API or implements an API interface. When any method calls an internal method, an internal method call is generated; typically internal methods call other internal methods. Sometimes the API calls internal methods when those methods are passed to it as handlers; since this external-to-internal calling is a consequence of the application developer's design, such calls are still counted as internal method calls. External methods come from an external source that developers cannot modify, including external libraries and APIs. When an internal method calls an external method, an external method call is generated. External-to-external calls are not counted.

Example 5

  class Sample extends java.lang.Thread {
      void methodP() {
          methodQ("This is an Example");
      }
      void methodQ(String s) {
          System.out.println(s);
      }
      public void run() {
          methodP();
      }
      public static void main(String[] args) {
          new Sample().start();
      }
  }

In Example 5, when the Sample class is executed, the Java launcher calls the main() method. As the Sample class extends Thread, calling start() registers the thread with the Java scheduler. The scheduler then calls run(), which calls methodP(), which in turn calls methodQ(); methodQ() internally calls System.out.println(). When System.out.println() executes, many internal API calls occur, for instance into the underlying stream and buffer classes.
However, these are not counted as external or internal method calls.

VII TOOLS FOR SOFTWARE METRICS

Fig 5 Process for Dynamic Metrics

A) E-MTRACE

The polymorphism metric is evaluated by comparing two software applications in the same domain, whose source code is examined manually for reuse and reusability issues. A filter is needed to remove unwanted values such as system method calls invoked by the JVM itself. The filtered data are then examined to recognize polymorphic and non-polymorphic dispatches, from which the metric values are derived. The tool implemented to capture this information is known as E-MTRACE [1]. It uses JVMTI, the JVM Tool Interface, which examines and controls the process executing in the JVM (Java Virtual Machine). A file hook is used to insert bytecode into methods: the file hook interrupts the JVM as it loads a Java .class file into the heap, and inserts profiling instrumentation code into the .class file before any method is called. The instrumentation code can also be inserted at run time. It is executed whenever a method is called; it uses the program stack to recognize the dispatched class and the local variable table to trace the declared interface. Once the dispatched class and the declared interface are identified, the E-MTRACE analyzer tool processes them and produces the lists of polymorphic and non-polymorphic dispatch results.

B) ReSharper and CodeRush

ReSharper and CodeRush are tools that extend the native functionality of recent versions of Microsoft Visual Studio. They perform static analysis of the code, such as error detection, without compiling it.
They also provide enhanced features such as error correction, code generation, syntax highlighting, optimization, formatting and many others.

VIII CONCLUSION

Quality is not an easy factor to examine, and to manage the complexity several properties, chief among them the polymorphism factor in object-oriented design, have been proposed and given special attention by application developers and the software engineering community. This paper has discussed polymorphism metrics for static and dynamic behavior with respect to object-oriented analysis [2] and compared the Chidamber and Kemerer metrics, the MOOD (Metrics for Object Oriented Design) metrics and the polymorphism metrics. The pairs NOC-SPD and DIT-SPA, drawn from the Chidamber and Kemerer and polymorphism metrics, are highly correlated [2]. The paper has also discussed tools such as E-MTRACE, which deals with polymorphic behavior; E-MTRACE is a tool developed to measure Java applications [1]. The polymorphic behavior metric can be used to find the interfaces with the most polymorphic dispatches, and such interfaces may form the basis for new frameworks and APIs. Finally, it is worth noting that these metrics and measures capture valuable data from the earliest phases of the product development lifecycle, giving engineers a chance to evaluate fault-prone areas early, advancing the quality factor and maintaining the continuing capability of the software product.
Friday, January 17, 2020
Introduction Internet Protocol Suite Essay
The Internet protocol suite is the set of communications protocols used for the Internet and similar networks, and generally the most popular protocol stack for wide area networks. It is commonly known as TCP/IP, after its most important protocols, the Transmission Control Protocol (TCP) and the Internet Protocol (IP), which were the first networking protocols defined in this standard. It is occasionally known as the DoD model, due to the foundational influence of the ARPANET in the 1970s (operated by DARPA, an agency of the United States Department of Defense).

TCP/IP provides end-to-end connectivity, specifying how data should be formatted, addressed, transmitted, routed and received at the destination. It has four abstraction layers, each with its own protocols. From lowest to highest, the layers are:

The link layer (commonly Ethernet) contains communication technologies for a local network.

The internet layer (IP) connects local networks, thus establishing internetworking.

The transport layer (TCP) handles host-to-host communication.

The application layer (for example HTTP) contains all protocols for specific data communications services on a process-to-process level (for example, how a web browser communicates with a web server).

The TCP/IP model and related protocols are maintained by the Internet Engineering Task Force (IETF).

Figure: SRI First Internetworked Connection diagram

Layers in the Internet protocol suite

Figure: Two Internet hosts connected via two routers and the corresponding layers used at each hop.

The application on each host executes read and write operations as if the processes were directly connected to each other by some kind of data pipe. Every other detail of the communication is hidden from each process. The underlying mechanisms that transmit data between the host computers are located in the lower protocol layers.
Figure: Encapsulation of application data descending through the layers described in RFC 1122.

The Internet protocol suite uses encapsulation to provide abstraction of protocols and services. Encapsulation is usually aligned with the division of the protocol suite into layers of general functionality. In general, an application (the highest level of the model) uses a set of protocols to send its data down the layers, the data being further encapsulated at each level. The "layers" of the protocol suite near the top are logically closer to the user application, while those near the bottom are logically closer to the physical transmission of the data. Viewing layers as providing or consuming a service is a method of abstraction that isolates upper-layer protocols from the nitty-gritty detail of transmitting bits over, for example, Ethernet and collision detection, while the lower layers avoid having to know the details of each and every application and its protocol.

Even when the layers are examined, the assorted architectural documents (there is no single architectural model such as ISO 7498, the Open Systems Interconnection (OSI) model) have fewer and less rigidly defined layers than the OSI model, and thus provide an easier fit for real-world protocols. In point of fact, one frequently referenced document, RFC 1958, does not contain a stack of layers; it only refers to the existence of the "internetworking layer" and generally to "upper layers". The lack of emphasis on layering is a strong difference between the IETF and OSI approaches. RFC 1958 was intended as a 1996 "snapshot" of the architecture: "The Internet and its architecture have grown in evolutionary fashion from modest beginnings, rather than from a Grand Plan. While this process of evolution is one of the main reasons for the technology's success, it nevertheless seems useful to record a snapshot of the current principles of the Internet architecture."
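Encapsulation down the stack can be caricatured in a few lines of Java; the header labels here are purely illustrative and bear no relation to real wire formats:

```java
// Toy illustration of encapsulation: application data is wrapped by a
// transport-style header, then an internet-style header, then a
// link-style header on its way down the stack.
class EncapsulationDemo {
    static String wrap(String header, String payload) {
        return header + "[" + payload + "]";
    }

    public static void main(String[] args) {
        String app = "GET /";                  // application layer
        String segment = wrap("TCP", app);     // transport layer
        String packet = wrap("IP", segment);   // internet layer
        String frame = wrap("ETH", packet);    // link layer
        System.out.println(frame); // ETH[IP[TCP[GET /]]]
    }
}
```

Each layer treats everything it receives from above as opaque payload, which is exactly the abstraction described in the text.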
RFC 1122, entitled Host Requirements, is structured in paragraphs referring to layers, but the document refers to many other architectural principles and does not emphasize layering. It loosely defines a four-layer model, with the layers having names, not numbers, as follows:

• Application layer (process-to-process): This is the scope within which applications create user data and communicate this data to other processes or applications on another or the same host. The communications partners are often called peers. This is where the "higher level" protocols such as SMTP, FTP, SSH, HTTP, etc. operate.

• Transport layer (host-to-host): The transport layer constitutes the networking regime between two network hosts, either on the local network or on remote networks separated by routers. The transport layer provides a uniform networking interface that hides the actual topology (layout) of the underlying network connections. This is where flow control, error correction, and connection protocols exist, such as TCP. This layer deals with opening and maintaining connections between Internet hosts.

• Internet layer (internetworking): The internet layer has the task of exchanging datagrams across network boundaries. It is therefore also referred to as the layer that establishes internetworking; indeed, it defines and establishes the Internet. This layer defines the addressing and routing structures used for the TCP/IP protocol suite. The primary protocol in this scope is the Internet Protocol, which defines IP addresses. Its function in routing is to transport datagrams to the next IP router that has the connectivity to a network closer to the final data destination.

• Link layer: This layer defines the networking methods within the scope of the local network link on which hosts communicate without intervening routers.
This layer describes the protocols used to describe the local network topology and the interfaces needed to effect the transmission of internet-layer datagrams to next-neighbor hosts (cf. the OSI data link layer).

The Internet protocol suite and the layered protocol stack design were in use before the OSI model was established. Since then, the TCP/IP model has been compared with the OSI model in books and classrooms, which often results in confusion because the two models use different assumptions, including about the relative importance of strict layering. This abstraction also allows upper layers to provide services that the lower layers cannot, or choose not to, provide. Again, the original OSI model was extended to include connectionless services (OSIRM CL). For example, IP is not designed to be reliable and is a best-effort delivery protocol. This means that all transport-layer implementations must choose whether or not to provide reliability, and to what degree. UDP provides data integrity (via a checksum) but does not guarantee delivery; TCP provides both data integrity and delivery guarantee (by retransmitting until the receiver acknowledges the reception of the packet). This model lacks the formalism of the OSI model and associated documents, but the IETF does not use a formal model and does not consider this a limitation, as in the comment by David D. Clark, "We reject: kings, presidents and voting. We believe in: rough consensus and running code." Criticisms of this model, which have been made with respect to the OSI model, often do not consider ISO's later extensions to that model.

1. For multiaccess links with their own addressing systems (e.g. Ethernet) an address mapping protocol is needed. Such protocols can be considered to be below IP but above the existing link system.
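The "data integrity via a checksum" remark can be made concrete: UDP and TCP both use the 16-bit ones'-complement Internet checksum of RFC 1071. A short Java sketch of that computation (simplified: it checksums an arbitrary payload only, with no UDP pseudo-header):

```java
// 16-bit ones'-complement Internet checksum (RFC 1071), as used by
// UDP and TCP to detect corrupted data.
class InternetChecksum {
    static int checksum(byte[] data) {
        long sum = 0;
        for (int i = 0; i < data.length; i += 2) {
            int word = (data[i] & 0xFF) << 8;           // high byte
            if (i + 1 < data.length) word |= data[i + 1] & 0xFF; // low byte
            sum += word;
        }
        while ((sum >> 16) != 0)                        // fold carries back in
            sum = (sum & 0xFFFF) + (sum >> 16);
        return (int) (~sum & 0xFFFF);                   // ones' complement
    }

    public static void main(String[] args) {
        // A receiver recomputes the checksum; a flipped bit changes the result.
        System.out.printf("0x%04x%n", checksum("hello".getBytes()));
    }
}
```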
While the IETF does not use the terminology, this is a subnetwork-dependent convergence facility according to an extension to the OSI model, the internal organization of the network layer (IONL).

2. ICMP and IGMP operate on top of IP but do not transport data like UDP or TCP. Again, this functionality exists as layer-management extensions to the OSI model, in its Management Framework (OSIRM MF).

3. The SSL/TLS library operates above the transport layer (it uses TCP) but below application protocols. Again, there was no intention, on the part of the designers of these protocols, to comply with the OSI architecture.

4. The link is treated like a black box here. This is fine for discussing IP (since the whole point of IP is that it will run over virtually anything). The IETF explicitly does not intend to discuss transmission systems, which is a less academic but practical alternative to the OSI model.

The following is a description of each layer in the TCP/IP networking model, starting from the lowest level.

Link layer

The link layer is the networking scope of the local network connection to which a host is attached. This regime is called the link in Internet literature. It is the lowest component layer of the Internet protocols, as TCP/IP is designed to be hardware independent; as a result, TCP/IP can be implemented on top of virtually any hardware networking technology. The link layer is used to move packets between the internet-layer interfaces of two different hosts on the same link. The processes of transmitting and receiving packets on a given link can be controlled both in the software device driver for the network card and in firmware or specialized chipsets. These perform data link functions such as adding a packet header to prepare it for transmission, then actually transmitting the frame over a physical medium.
The TCP/IP model includes specifications for translating the network addressing methods used in the Internet Protocol to data link addressing, such as Media Access Control (MAC); however, all other aspects below that level are implicitly assumed to exist in the link layer, but are not explicitly defined. This is also the layer where packets may be selected to be sent over a virtual private network or other networking tunnel. In this scenario, the link layer data may be considered application data which traverses another instantiation of the IP stack for transmission or reception over another IP connection. Such a connection, or virtual link, may be established with a transport protocol or even an application scope protocol that serves as a tunnel in the link layer of the protocol stack. Thus, the TCP/IP model does not dictate a strict hierarchical encapsulation sequence.
Internet layer
The internet layer has the responsibility of sending packets across potentially multiple networks. Internetworking requires sending data from the source network to the destination network. This process is called routing. In the Internet protocol suite, the Internet Protocol performs two basic functions:
• Host addressing and identification: This is accomplished with a hierarchical addressing system (see IP address).
• Packet routing: This is the basic task of sending packets of data (datagrams) from source to destination by sending them to the next network node (router) closer to the final destination.
The internet layer is not only agnostic of application data structures at the transport layer, but it also does not distinguish between the operation of the various transport layer protocols. So, IP can carry data for a variety of different upper layer protocols. These protocols are each identified by a unique protocol number: for example, Internet Control Message Protocol (ICMP) and Internet Group Management Protocol (IGMP) are protocols 1 and 2, respectively.
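These IANA-assigned protocol numbers, carried in the Protocol field of the IPv4 header, are exposed as constants by common socket APIs. A quick Python check (Python is used here only for illustration; other languages' socket libraries expose the same values):

```python
import socket

# IANA protocol numbers carried in the IPv4 header's Protocol field
print(socket.IPPROTO_ICMP)  # 1
print(socket.IPPROTO_IGMP)  # 2
print(socket.IPPROTO_TCP)   # 6
print(socket.IPPROTO_UDP)   # 17
```

The receiving host's IP implementation uses this one-byte field to decide which upper-layer module (ICMP, IGMP, TCP, UDP, and so on) should be handed the datagram's payload.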
Some of the protocols carried by IP, such as ICMP (used to transmit diagnostic information about IP transmission) and IGMP (used to manage IP multicast data), are layered on top of IP but perform internetworking functions. This illustrates the differences in the architecture of the TCP/IP stack of the Internet and the OSI model. The internet layer only provides an unreliable datagram transmission facility between hosts located on potentially different IP networks by forwarding the transport layer datagrams to an appropriate next-hop router for further relaying to its destination. With this functionality, the internet layer makes possible internetworking, the interworking of different IP networks, and it essentially establishes the Internet. The Internet Protocol is the principal component of the internet layer, and it defines two addressing systems to identify network hosts (computers) and to locate them on the network. The original address system of the ARPANET and its successor, the Internet, is Internet Protocol version 4 (IPv4). It uses a 32-bit IP address and is therefore capable of identifying approximately four billion hosts. This limitation was eliminated by the standardization of Internet Protocol version 6 (IPv6) in 1998, with production implementations beginning in approximately 2006.
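The 32-bit limit mentioned above is easy to verify with Python's standard ipaddress module; this is an illustrative sketch, not part of the original text, and the address used is a documentation-only address from RFC 5737:

```python
import ipaddress

# A 32-bit address space gives 2**32 (about 4.29 billion) possible addresses.
print(2 ** 32)  # 4294967296

# Dotted-quad notation and the underlying 32-bit integer are two views
# of the same value.
addr = ipaddress.IPv4Address("192.0.2.1")
print(int(addr))                          # 3221225985
print(ipaddress.IPv4Address(3221225985))  # 192.0.2.1

# IPv6 widens the address to 128 bits, removing the practical limit.
print(ipaddress.IPv6Address("2001:db8::1").max_prefixlen)  # 128
```

The jump from 32 to 128 bits is why IPv6 is described as eliminating the limitation: 2**128 addresses is a space roughly 7.9 × 10**28 times larger than IPv4's.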
Thursday, January 9, 2020
Taking a Look at Greece - 531 Words
Research Paper Greece Greece is a thriving country, but if it wasn't for its beautiful islands, seas, and mountains, Greece would not be as strong of a country today. All these factors have helped Greece grow as a country. Throughout history, various mountains such as Mount Olympus and seas such as the Mediterranean have played an important role in the development of Greek history and culture. Greece is a country that is surrounded by mostly water, and the sea has played an important role in its history ("Ancient Greek Colonization…"). The ancient Greeks were often known as "seafarers," looking for opportunities for trade and founding new coastal sites along the Mediterranean Sea. Trading stations were the furthest outposts of Greek culture. At these trading stations, Greek goods, such as bronze, silver, olive oil, wine and pottery, were exchanged for more luxurious items ("Ancient Greek Colonization…"). Also, well-established maritime routes around the Mediterranean Sea enabled foreigners to travel to Greece. After the military campaign of Alexander the Great, far more intense trade routes were opened across Asia. These trade routes extended as far as Afghanistan and the Indus River Valley ("Ancient Greek Colonization…"). Not only did these trade routes help with trading goods, they also helped in introducing Greece to new cultures and in spreading Greek culture throughout Europe. The spread of these cultures can be known as cultural diffusion
Wednesday, January 1, 2020
Intercepting Keyboard Input With Delphi
Consider for a moment the creation of some fast arcade game. All the graphics are displayed, let's say, in a TPaintBox. TPaintBox is unable to receive the input focus: no events are fired when the user presses a key, so we cannot intercept cursor keys to move our battleship. Delphi, help!
Intercept Keyboard Input
Most Delphi applications typically handle user input through specific event handlers, those that enable us to capture user keystrokes and process mouse movement. We know that focus is the ability to receive user input through the mouse or keyboard. Only the object that has the focus can receive a keyboard event. Some controls, such as TImage, TPaintBox, TPanel, and TLabel, cannot receive focus. The primary purpose of most graphic controls is to display text or graphics. If we want to intercept keyboard input for controls that cannot receive the input focus, we'll have to deal with the Windows API: hooks, callbacks and messages.
Windows Hooks
Technically, a hook function is a callback function that can be inserted in the Windows message system so an application can access the message stream before other processing of the message takes place. Among the many types of Windows hooks, a keyboard hook is called whenever the application calls the GetMessage() or PeekMessage() function and there is a WM_KEYUP or WM_KEYDOWN keyboard message to process. To create a keyboard hook that intercepts all keyboard input directed to a given thread, we need to call the SetWindowsHookEx API function. The routines that receive the keyboard events are application-defined callback functions called hook functions (here, KeyboardHookProc). Windows calls your hook function for each keystroke message (key up and key down) before the message is placed in the application's message queue. The hook function can process, change or discard keystrokes. Hooks can be local or global. The return value of SetWindowsHookEx is a handle to the hook just installed.
Before terminating, an application must call the UnhookWindowsHookEx function to free the system resources associated with the hook.
Keyboard Hook Example
As a demonstration of keyboard hooks, we'll create a project with a graphical control that can receive key presses. TImage is derived from TGraphicControl; it can be used as a drawing surface for our hypothetical battle game. Since TImage is unable to receive keyboard presses through the standard keyboard events, we'll create a hook function that intercepts all keyboard input directed to our drawing surface.
TImage Processing Keyboard Events
Start a new Delphi project and place one Image component on a form. Set the Image1.Align property to alClient. That's it for the visual part; now we have to do some coding. First, we'll need some global variables:

var
  Form1: TForm1;
  KBHook: HHook; {this intercepts keyboard input}
  cx, cy: Integer; {track the battleship's position}

{callback declaration}
function KeyboardHookProc(Code: Integer; WordParam: Word; LongParam: LongInt): LongInt; stdcall;

implementation
...

To install the hook, we call SetWindowsHookEx in the OnCreate event of the form.
procedure TForm1.FormCreate(Sender: TObject);
begin
  {Set the keyboard hook so we
   can intercept keyboard input}
  KBHook := SetWindowsHookEx(WH_KEYBOARD,
                             @KeyboardHookProc, {callback}
                             HInstance,
                             GetCurrentThreadId());

  {place the battleship in
   the middle of the screen}
  cx := Image1.ClientWidth div 2;
  cy := Image1.ClientHeight div 2;
  Image1.Canvas.PenPos := Point(cx, cy);
end;

To free the system resources associated with the hook, we must call the UnhookWindowsHookEx function in the OnDestroy event:

procedure TForm1.FormDestroy(Sender: TObject);
begin
  {unhook the keyboard interception}
  UnhookWindowsHookEx(KBHook);
end;

The most important part of this project is the KeyboardHookProc callback used to process the keystrokes:

function KeyboardHookProc(Code: Integer; WordParam: Word; LongParam: LongInt): LongInt;
begin
  case WordParam of
    vk_Space: {erase the battleship's path}
      begin
        with Form1.Image1.Canvas do
        begin
          Brush.Color := clWhite;
          Brush.Style := bsSolid;
          FillRect(Form1.Image1.ClientRect);
        end;
      end;
    vk_Right: cx := cx + 1;
    vk_Left:  cx := cx - 1;
    vk_Up:    cy := cy - 1;
    vk_Down:  cy := cy + 1;
  end; {case}

  {wrap around at the edges of the drawing surface}
  if cx < 2 then cx := Form1.Image1.ClientWidth - 2;
  if cx > Form1.Image1.ClientWidth - 2 then cx := 2;
  if cy < 2 then cy := Form1.Image1.ClientHeight - 2;
  if cy > Form1.Image1.ClientHeight - 2 then cy := 2;

  with Form1.Image1.Canvas do
  begin
    Pen.Color := clRed;
    Brush.Color := clYellow;
    TextOut(0, 0, Format('%d, %d', [cx, cy]));
    Rectangle(cx - 2, cy - 2, cx + 2, cy + 2);
  end;

  Result := 1; {To prevent Windows from passing the keystrokes
                to the target window, the Result value must
                be a nonzero value.}
end;

That's it. We now have the ultimate keyboard-processing code. Note just one thing: this code is in no way restricted to be used only with TImage.
The KeyboardHookProc function serves as a general key-preview and key-processing mechanism.