Friday, June 7, 2019

Microbiology Module Essay Example for Free

Microbiology Module Essay

1.) This article was published in March-April 2001.
2.) The two main types are resident flora and transient flora.
3.) Hand hygiene is used to prevent the colonization of transient flora. It includes hand washing and hand disinfection. Hand washing refers to washing hands with an unmedicated detergent and water, or with water alone. Its objective is to prevent cross-transmission by removing dirt and loose transient flora.
4.) Hand disinfection refers to the use of an antiseptic solution to clean the hands, either a medicated cleanser or alcohol.
5.) Alcohol is the agent that has excellent activity.
6.) Propanol is the most effective alcohol and ethanol the least.
7.) Within several hours the resident flora are fully restored.
8.) The type and duration of patient care influenced the amount of bacteria found on caregivers' hands.
9.) The factors for noncompliance include insufficient numbers of sinks, a perceived low risk of acquiring infection from patients, the belief that glove use obviates the need for hand hygiene, and ignorance of or disagreement with guidelines and protocols. Workers also suffer skin irritation.
10.) Reasons reported by health-care workers for the lack of adherence with recommendations include skin irritation, inaccessible supplies, interference with the worker-patient relationship, patient needs perceived as a priority, wearing gloves, forgetfulness, ignorance of guidelines, insufficient time, high workload and understaffing, and a lack of scientific information demonstrating the impact of improved hand hygiene on hospital infection rates.
11.) Lack of knowledge on the topic is the key barrier.
12.) Compliance was highest in the ICU and lowest on the open ward.
13.) Hospitals should provide their workers with alcohol-based hand rubs, and free lotions and antiseptics. They should also educate their workers and stress the importance of hand hygiene. Hospitals should develop a framework and make good hand hygiene a part of the culture of their hospital.

Thursday, June 6, 2019

The filament bulb obeys ohms law Essay Example for Free

The filament bulb obeys ohms law Essay

I think this was caused by air already in the test tube being pushed out. To avoid this I could have measured how much O2 started in the tube and then subtracted that from my 1st measurement. My experiment was good because it was repeated enough times, three times, so that any anomalous results could be clearly seen next to a best-fit curve. Also all of my results had a best-fit curve and the values increased throughout, backing up my prediction that as the substrate concentration increased so would the initial rate of reaction. Using a measuring cylinder rather than a gas syringe to collect the O2 is better because gas syringes, although easier to use, do not always move freely when oxygen enters. In my experiment the oxygen bubbles could be clearly seen in the water inside the measuring cylinder and had no trouble reaching the cylinder.

For each limitation below I consider how it affects accuracy and/or reliability, why it is important, and what modifications could address it.

Limitation: O2 escaping due to the tubes in the bung. If O2 escaped then the volume of O2 collected would be wrong and therefore the result could not be reliable. This is very important, as if gas was escaping then it would not have got into the tube, affecting the amount of O2 collected in the experiment. However, as the same equipment was used throughout, this is not a very important factor, as it would have been the same for all of the experiments. Modification: use Vaseline around the tubes to stop O2 escaping, and check for any gas escaping through holes in the tube that is in the water. This would stop O2 escaping but wouldn't really change the reliability very much, just the accuracy of the result.

Limitation: the surface area of the yeast not being similar. This is a variable, and therefore not keeping it the same means two things are being investigated at the same time; this would mean that the results gathered have some inaccuracies and cannot be reliable.
This is the most important factor because a larger surface area means that there will be more to react with. If there were a very small surface area the reaction would be slow, as there is not much for the substrate to react with. Modification: by crushing the yeast with a pestle and mortar the surface areas would all be the same, though this would speed up the reactions dramatically as it would give the maximum surface area. This would have made the results a lot more reliable as they all would have begun with the same surface area.

Limitation: the test tube contained O2 before the H2O2 was added. This means that the first measurement could be quite high, when there is little activity, as the substrate being pushed in pushes oxygen out through the tube. This is important as it explains the 1st result being much faster than the 2nd throughout the 5 experiments. However, it is the same for all of the experiments, so it wouldn't make a big difference to the comparison of my results. Modification: making a vacuum around the experiment would stop O2 getting into the tube. An easier alternative would be to measure the O2 in the tube beforehand and then subtract that number from my 1st measurement. Although this would increase accuracy it would not alter the reliability, as the amount of O2 in the tube is the same each time.

Limitation: an obstruction in the tube. This would slow or stop the movement of O2 through to the measuring cylinder. If there were a blockage then it would cause the results to be much lower than they should be, with a much slower initial rate of reaction, because less O2 would reach the measuring cylinder and be measured. Modification: by rinsing out the tube before each experiment any obstructions can be removed. If there were an obstruction then doing this would make the results more reliable and much more accurate.

The results that I gathered, in my opinion, are not all reliable. This is mainly due to the wide range of results gathered in my 5ml H2O2 experiment, the final measurements being 45cm3, 93cm3 and 92cm3.
Also, my 2ml H2O2 experiment ended up with a higher initial rate of reaction and more O2 collected than the 3ml H2O2 and 4ml H2O2 experiments. Repeating the experiment 3 times and then taking an average helps to hide these rogue results. Another reason why my results are unreliable is that the surface area was not the same each time. If the yeast in one experiment had a much higher surface area then it was going to have a much faster initial rate of reaction than an experiment where the yeast had a small surface area. This is likely to be why my 2ml H2O2 experiment came out higher than my 3ml and 4ml H2O2 experiments.

On my graphs I have circled what I think are anomalous results. My first anomalies occur on my 2ml H2O2 graph. Between 40 seconds and 60 seconds the O2 collected is 14.3cm3, 17.7cm3 and 21.7cm3. I think that, although the graph on the whole is unreliable, these are anomalous because they do not fit the best-fit curve. On the 3ml H2O2 graph I have circled two points, as these points dip below the best-fit curve and then come back up again. At 70 seconds and 80 seconds the O2 collected is 20.7cm3 and 22.7cm3. A possible reason for this could have been that the tube might have been blocked, perhaps by the way the measuring cylinder was held. It might have been different if the measuring cylinder had been clamped so it couldn't move and therefore couldn't squash the tube. By holding the measuring cylinder it was possible that it was pressed down on the tube briefly. This would have held the O2 in the tube, and then when it was released the O2 would have all come out at once, resulting in the points moving back to the best-fit line. On the 5ml H2O2 graph I have circled one point. This point is at 30 seconds and misses the best-fit curve by about 4cm3: it has 30cm3 whereas the curve crosses 30 seconds at 34cm3.
The reason for this anomaly could have been the same as above, or possibly a reading inaccuracy. Also, when holding the measuring cylinder, it was not always held perfectly upright, and therefore could have given a false reading, but this is likely to have been the same throughout the experiment.

Bibliography
These are the books from which I gathered my information and used to make my prediction:
Indge, Rowland, Baker (2000) A New Introduction to Biology (Hodder & Stoughton)
Jones, Fosbery and Taylor (2000) Biology 1 (Cambridge University Press)
Toole, Glenn and Susan (1999) Understanding Biology, Fourth Edition (Stanley Thornes Ltd).
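As an aside, the initial-rate and best-fit reasoning in the evaluation above can be sketched numerically. The volume readings below are invented for illustration, not the experiment's actual data:

```python
# Minimal sketch: estimate the initial rate of reaction from O2 volume
# readings and flag points that sit far from a straight best-fit line.
# The data below are illustrative only, not the essay's actual readings.

times = [10, 20, 30, 40, 50, 60]               # seconds
volumes = [5.0, 10.2, 14.8, 14.3, 24.9, 30.1]  # cm^3 of O2 collected

def fit_line(xs, ys):
    """Ordinary least-squares fit of y = m*x + c."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    m = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return m, my - m * mx

slope, intercept = fit_line(times, volumes)
print(f"initial rate ~ {slope:.2f} cm^3/s")

# Treat a point as anomalous if it misses the fit by more than 4 cm^3,
# the size of the deviation discussed in the essay.
for t, v in zip(times, volumes):
    residual = v - (slope * t + intercept)
    if abs(residual) > 4:
        print(f"anomaly at {t}s: measured {v} cm^3")
```

A real analysis would fit a curve rather than a line, since the rate falls as substrate is used up, but the flagging logic is the same.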

Wednesday, June 5, 2019

The financing of the UK healthcare system

The financing of the UK healthcare system

Since the recession, UK debt and deficit have been at an all-time high: by the end of 2009 UK debt was reported to be 950.4 billion, equivalent to 68.1% of gross domestic product (GDP), and the deficit was 159.2 billion, which equated to 11.4% of GDP (Figure 1).1 With that in mind, it is a fact that all public sectors will be facing spending cuts to reduce the government's debt and deficit. Since the NHS receives its funding from the government, it is logical that it will face spending cuts too. Therefore, it is significantly important to use economics as one of the determinants in the allocation of already scarce healthcare resources.

Figure 1. Shows the UK government debt and deficit as a percentage of GDP, from 2006 until the end of 2009.1

Economics is concerned with efficiently allocating the limited available resources, between alternative uses, to achieve maximum effectiveness.2 There is an ever increasing number of different technologies and medical interventions that cannot all be used to treat illnesses. The limited resources in the healthcare services mean decisions on resource allocation have to be made carefully so that maximum effectiveness can be achieved. In order to efficiently allocate resources, one has to consider the economic evaluation of the different alternatives before implementing the one that is the most effective and cost-effective.3 Health economics is used to improve people's health, which is how it differs from general economics: it is not about analysing consumers' demand and supply, but about analysing the benefits of medical interventions in relation to their cost. In health economics it is also more difficult to measure health outcomes than it is to measure financial outcomes in financial economics.
Outcomes of healthcare interventions are usually measured in quality-adjusted life years (QALYs).3

Patterns of financing healthcare

There are two methods of financing healthcare: public financing and private financing.4 Public financing of healthcare raises capital through taxation of the public (Table 1). The NHS is funded mainly through public financing. Private healthcare is where the capital is raised through the patients who use the health services. The patients either pay themselves or are usually insured, so the insurance company pays their healthcare bills (Table 2). The healthcare system in the USA raises capital through private financing.5

Table 1. Describes the different methods and sources of public financing in healthcare.

General Tax Revenues (e.g. UK, Italy, New Zealand): Finance is raised by taxation, and the cost of raising funds is low. General taxation pays all the bills, so patients do not see the cost per capita. There are two types of general taxation: regressive taxation, falling more on the poor than on the rich, which includes tax on items such as tobacco, alcohol and recreational events; and progressive taxation, falling more on the rich than on the poor, which includes tax on luxury products purchased by the rich.

Deficit Financing: Raised by issuing bonds with long-term, low-interest repayments and by bilateral or multilateral aid loans; that is, borrowing and spending funds that are repaid over a period of time. Deficit financing supplements general tax revenue and is used for the development and expansion of healthcare infrastructure.

Earmarked Taxes: Tax on a particular product, such as lottery and gambling, reserved for particular services such as healthcare.

Social Insurance (e.g. France, Germany and Austria): The state acts as insurer, financed by employer and employee payroll deductions. Social insurance is based upon the collective risk of the insured group. The government might also contribute to social insurance.

Public Healthcare Insurance (e.g. Canada, Taiwan and South Korea): Uses private-sector providers, but payment is made by government-run insurance programmes. Capital expenditures are financed from tax revenues. It is cheaper and much simpler to administer than American for-profit insurance. Wealth is transferred solely from low-risk to high-risk groups, not from those with high incomes to those with low incomes.

Table 2. Describes the different methods and sources of private financing in healthcare.

Private Health Insurance: A social device in which a group of individuals transfer risk to another party in order to share loss experience, through risk pooling and risk funding. The system of third-party payments has the effect of increasing demand, increasing prices and allocating resources inefficiently.

Employer Financed Schemes: Employers directly finance healthcare for their employees, focusing on accident prevention and occupational health. They pay for private-sector health services, employ medical personnel directly and provide the necessary facilities and equipment. Employees' families are also covered.

Community Financing: Voluntary in nature; payment for healthcare is made by members of the community, and resources are controlled directly by the community.

Direct Household Expenditure: Health expenditure constitutes a large share of GDP through people purchasing more health services, people buying higher-quality health services, and government services charging fees to users. It raises household costs and creates inequity.

A study produced by the World Health Organisation concluded that in healthcare services that were publicly funded, expenditure was lower, both as a percentage of GDP and per capita.
It also concluded that the population as a whole gained better health outcomes, universal standards were in place, and the costs of treating illnesses were reduced by increased emphasis on preventative primary care.6

Healthcare systems in the UK and USA

In the UK, the National Health Service (NHS) was developed in 1948, where healthcare was free for the whole population and paid for by taxation, which means people would pay for it according to their means, not their needs.7

The NHS is wholly funded by the government, through various methods such as taxation and national health insurance (Table 1). Only 1.3% of the total NHS expenditure is provided through charging patients; the other 98.7% is funded by the government, where 90.3% of that comes from taxation and 8.4% comes from national insurance.8 In the UK, only 11.5% of the population purchase supplementary private health insurance, whereas in the USA over 67% of the population have health insurance.9,10

In the USA the healthcare system is not funded by the government but rather by public and private health insurance. Private insurance, which is generally employment-based, funds 67.5% of the healthcare budget, and the rest is funded by public health insurance. The healthcare system in the USA is funded by the demand for effective health, whereas the NHS is funded by the supply of healthcare. There are various programmes of public health insurance that are used to fund healthcare in the USA. These programmes include Medicaid, which helps the poor; Medicare, which helps the elderly and the disabled; the State Children's Health Insurance Program, which aims to help poor children; and finally other plans such as those offered to the military.
Although these public health insurance programmes are in place to provide help to the poor, elderly and disabled, 45.7% of Americans do not have health insurance.10

The healthcare systems of the USA and the UK also differ in terms of health outcomes, availability and costs. In 2009 the total health expenditure in the USA was 15.7% of GDP, in comparison to only 8.4% of GDP in the UK. Tables 3, 4 and 5 demonstrate the differences between the two healthcare systems.11 Also, even though the USA has a much higher health expenditure than the UK, it still has a lower life expectancy at birth (78.8 years) compared to the UK (79.5 years).

Table 3. Compares the healthcare expenditure of the USA and the UK healthcare systems in 2007.11

Indicator | UK | USA
Total expenditure on health, % GDP | 8.4 | 16
Total expenditure on health, per capita US$ PPP | 2992 | 7290
Public expenditure on health, % total expenditure on health | 81.7 | 45.4
Public health expenditure per capita, US$ PPP | 2446 | 3307
Out-of-pocket expenditure on health, % of total expenditure on health | 11.4 | 12.2
Out-of-pocket expenditure on health, US$ PPP | 343 | 890

Table 4. Compares the healthcare resources of the UK and USA healthcare systems.11

Indicator | Year | UK | USA
Practising physicians, density per 1,000 population | 2007 | 2.5 | 2.4
Practising nurses, density per 1,000 population | 2007 | 10 | 10.6
Medical graduates, density per 1,000 practising physicians | 2006 | 37.7 | 26
Hospital beds, density per 1,000 population | 2007 | 3.4 | 3.1
Acute care beds, density per 1,000 population | 2006 | 2.8 | 2.7
Psychiatric care beds, density per 1,000 population | 2006 | 0.7 | 0.3
MRI units per million population | 2007(e) | 8.2 | 25.9
CT scanners per million population | 2006(e) | 7.6 | 32

Table 5.
Compares health and disease between the UK and the USA.

Indicators of Health | UK | USA
Life expectancy at birth (years) | 79.5 | 78.8
Mortality rate under 5 (per 1,000) | 5.7 | 7.8
Maternal mortality (per 1,000) | 8 | 11

Disease | UK | USA
Diabetes hospital discharges per 100,000 | 72 | 197.9
Cancer hospital discharges per 100,000 | 994 | 563
Acute myocardial hospital discharges per 100,000 | 153 | 277

The comparisons above show that increased funding does not mean that the quality of health will improve. The USA spends much more capital on healthcare than the UK, but it still has a higher mortality rate for children under the age of 5. The tables above demonstrate that in the NHS the funds received are spent much more effectively than in the healthcare system of the USA, showing that more effective resource allocation decisions are made and hence better health outcomes are achieved. Also, due to the lack of health coverage in the USA, around 45,000 people die every year.12 Such figures do not exist in the NHS, as healthcare services in the UK are free for everyone.

Another way of showing how the NHS is better than the health service in the USA is that in the UK patients are treated according to their illnesses regardless of their social class, whereas in the USA more income means better treatment, which of course only benefits the rich.
Also, administration charges in the publicly funded health services in the USA, such as Medicare and Medicaid, cost much more than the services in the NHS, making healthcare less readily available to the poor, elderly or disabled.

The importance of the application of economic evaluation in the NHS, to provide decision makers with robust information to guide resource allocation decisions

Economic evaluation is defined as a comparative analysis of two or more courses of action in terms of both their costs and consequences.13 Hence in healthcare it can be thought of as a framework to assess the benefits and costs of each alternative method of healthcare intervention. The limited resources in healthcare, such as people, equipment and facilities, provide a useful framework within which alternative uses of the available resources can be compared. Economic evaluation in healthcare aims to maximise the outcomes from available resources by aiding resource allocation.13

There are three types of economic evaluation: cost-effectiveness analysis (CEA), cost-utility analysis (CUA) and cost-benefit analysis (CBA). Although these terms characterise different types of analysis, they share some common components, which include a stated perspective, a comparison group, evidence of effectiveness, evidence of costs and a method of combining costs and effects. The differences between the analyses lie in the ways used to measure and value health outcomes. When the health outcomes of the compared interventions are established to be the same, a cost-minimisation analysis (CMA), which is a sub-component of CEA, is used, and only the inputs are considered.
This analysis aims to decide which intervention is the cheapest method of attaining the same outcome.13

Resource allocation decisions in the NHS are very important because the demand for healthcare exceeds the resources that are available, which presents health administrators with many challenges. Due to the acknowledged resource constraints in the NHS, economic evaluations have become a recognised part of policy making.14 In England, the National Institute for Health and Clinical Excellence (NICE) is in charge of providing national guidance for promoting good health and the treatment and prevention of ill health, and provides clinical guidance to improve the quality of healthcare.15 In order to do that, the effectiveness and cost-effectiveness of the compared healthcare interventions are required to be considered.

There is a large increase in procedures and technologies for the prevention and treatment of diseases. Therefore, there are many alternative treatments and preventions of illness, with variations in efficiency and quality of care. Rational priorities in healthcare cannot otherwise be set for current and new resources. Hence, NICE considers whether the resources available are being used in the best way possible to maximise efficiency. Technology appraisals are recommendations by NICE on the use of existing and new treatments and medicines within the NHS, such as surgical procedures, medical devices etc., which the NHS is legally obliged to fund. These very important recommendations are based on evidence of how well the treatments and medicines work (clinical evidence) and how well they work in relation to their cost (economic evidence) (i.e.
does it represent value for money?).16

Discuss the principles and an appropriate method for conducting an economic evaluation of breast cancer screening

The breast cancer screening programme aims at detecting breast cancer at an early stage in women between the ages of 50 and 64, who are at a significantly increased risk of developing the neoplasm.

An economic evaluation of the breast cancer screening programme would need to compare the cost-effectiveness of the programme, and of the treatment that would follow, with the cost-effectiveness of symptomatic detection of breast cancer and the appropriate treatment that would also follow. One would have to calculate the QALYs of both the screening programme and symptomatic detection, in order to achieve a quantitative measure of the benefits of the two interventions. In order to calculate QALYs one would need to work out the quality of life during the disease stage and multiply it by the duration of the disease stage. This provides a quantitative measure so that two interventions aimed at the same disease can be compared. Then one would need to calculate the costs of each intervention. Together these give the cost-effectiveness of each intervention and show which is more cost-effective.3

Evaluate the rationale of the screening programme targeted at women aged between 50 and 64 in the UK

It is established that breast cancer is the most common type of cancer in the UK: 45,700 women and 277 men were diagnosed with it in 2007. Over the last 25 years, the incidence of female breast cancer rose by 50%. It is much more common in women over the age of 50, where 8 out of 10 women diagnosed fall in that age group.17

16,000 cases of breast cancer were detected in 2007/2008 through the NHS breast screening programme, and it is estimated that 1,400 lives are saved every year because of this programme. Approximately 2 out of 3 women with breast cancer survive more than 20 years with the disease.
Where before 5 out of 10 women survived beyond 5 years, now it is 8 out of 10 women. The graph (Figure 2) below illustrates the decreasing mortality of women diagnosed with breast cancer in comparison to the past. The earlier breast cancer is diagnosed, the greater the chance of survival. Approximately 9 out of 10 women diagnosed with stage I breast cancer survive longer than 5 years, whereas only 1 out of 10 women diagnosed with stage IV breast cancer survive beyond 5 years. Although so many lives are saved each year due to the screening programme, there were still 12,116 deaths from breast cancer in 2008, and 99% of these were in women.

Therefore, it is crucial to detect breast cancer as early as possible to increase the chances of survival and quality of life. In addition, detecting breast cancer at an early stage and treating it would cost less than the long-term treatment of women diagnosed with later-stage breast cancer.18

The reason the screening programme is for women between the ages of 50 and 64 is that this age group has a much higher incidence of breast cancer in comparison to younger age groups. The average age of menopause is 50, and this is when the breasts become less dense and cancer can be detected much more easily. Compliance in the group of women over 64 years old is low, which would increase costs and decrease the benefit of the screening programme, making it less cost-effective.

Figure 2. Demonstrates the age-standardised (European) mortality rates of breast cancer patients in the UK from 1971 until 2007.

Conclusion

In conclusion, this report has discussed the different patterns of financing healthcare (Tables 1 and 2). The health system in the USA was compared with the NHS in terms of financing, availability and cost. It was determined that the NHS has a lower health expenditure as a percentage of GDP than the USA's.
However, the effective use of these resources, through guidance provided by NICE after taking into account economic evaluation of the different available alternatives, makes the NHS a better healthcare provider than the USA's healthcare system.

The importance of the economic evaluations that are used to provide robust information to the NICE committee, to aid policy-making decisions concerned with the allocation of the scarce resources of the NHS, has been discussed. Also, the principles and an appropriate method for conducting an economic evaluation of breast cancer screening were illustrated in this report.

Finally, the importance of the breast cancer screening programme for women aged between 50 and 64 years was examined, and the report demonstrates why the screening programme is so important and why this age group has been chosen for screening.
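The QALY arithmetic described in the economic evaluation section above can be sketched numerically. All figures below are invented for illustration, not real screening data:

```python
# Illustrative QALY and cost-effectiveness sketch (all numbers invented).
# QALYs = quality-of-life weight during a disease stage x years in that
# stage, summed over stages; the more cost-effective option has the
# lower cost per QALY.

def qalys(stages):
    """stages: list of (quality_weight_0_to_1, duration_years) pairs."""
    return sum(q * years for q, years in stages)

# Hypothetical outcome profiles for the two strategies compared above.
screening = qalys([(0.9, 5), (0.8, 10)])   # earlier detection, milder treatment
symptomatic = qalys([(0.7, 4), (0.6, 6)])  # later detection, harsher treatment

cost_screening = 12_000.0    # programme plus treatment cost per woman (invented)
cost_symptomatic = 18_000.0  # late-stage treatment cost per woman (invented)

print(f"screening:   {screening:.1f} QALYs, {cost_screening / screening:.0f} per QALY")
print(f"symptomatic: {symptomatic:.1f} QALYs, {cost_symptomatic / symptomatic:.0f} per QALY")

# Incremental cost-effectiveness ratio: extra cost per extra QALY gained.
# A negative ICER here means screening is both cheaper and more effective.
icer = (cost_screening - cost_symptomatic) / (screening - symptomatic)
print(f"ICER: {icer:.0f} per QALY gained")
```

With these invented numbers screening dominates; a real appraisal would also discount future QALYs and costs, which this sketch omits.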

Tuesday, June 4, 2019

Effect Of Vibration On Solder Joint Reliability Engineering Essay

Effect Of Vibration On Solder Joint Reliability Engineering Essay

CHAPTER 01
INTRODUCTION

1.1 SOLDER JOINTS IN ELECTRONIC ASSEMBLIES

Circuit boards range from simple single-sided moulded plastic boards with copper conductors on one or both sides to multilayer boards with copper conductors, each layer being separated by a dielectric and interconnected by metal conductors. Minimum line width and spacing between lines is less than 100 µm. The board typically is made from a composite such as an epoxy with layered sheets of woven fibreglass. The dielectric material between layers of conductors is usually a polymer, for example polyimide. To maintain solderability, the exposed copper may be coated with an inhibitor such as benzotriazole or with a solder overcoat. Components are attached to the board with solder or metal-filled conductive adhesives. Fully assembled boards may be further protected against moisture, contamination, and mechanical damage by a cover coat.

1.2 SOLDER JOINT RELIABILITY AND FAILURE

Solder joints are widely used in the electronic packaging industry to produce good electrical, thermal, and mechanical connections between the package and the printed circuit board. Eighty percent of the mechanical failures in airborne and automotive electronics are caused by vibration and shock, so designing appropriate measures to ensure the survival of equipment in shock and vibration environments is necessary. The remaining 20 percent of mechanical failures are related to thermal stresses resulting from large thermal gradients, mismatched coefficients of thermal expansion and high moduli of elasticity.

Solder joint failure occurs for several reasons:
- Poor design of the solder joint
- A bad solder joint process
- The solder material
- Excessive stress applied to the solder joints

In general, however, solder joint failures are simply ranked according to the nature of the stress that caused them.
Most joint failures fall into three major categories:
- Fatigue failure due to cyclic stress application
- Failure due to the application of a long-term or enduring load
- Failure where the stress is due to short-term overloading

The reflow profile also plays a significant role in solder joint reliability, because it strongly influences the microstructure of the solder joint.

Vibration failure of solder joints is often assessed for reliability development by a highly accelerated life test, which is represented by a GRMS-time curve. For surface mount microelectronic components, an approximation of the printed circuit board (PCB) modal analysis can be made by assuming the PCB is a bare, unpopulated thin plate, because the increase in stiffness of the PCB due to the mounting of the components is approximately offset by the increase in total mass of the populated PCB [2]. However, this approximation can lead to errors in natural frequency prediction for different package profiles, such as flip-chip-on-board (FCOB) and plastic-ball-grid-array (PBGA) assemblies [3,4]. When the component has a small profile, the approximation of the PCB assembly as a bare PCB can provide satisfactory modal analysis results, because the stiffness and mass contribution of a small component to the PCB assembly is not significant.

In this study, random vibration tests at varying G-levels were conducted on a PCB assembly. In order to assess the reliability of the PCB assembly, it is necessary to conduct a dynamic analysis. A global-local modelling approach [4-6] was used. The analyses by Basaran [7,8], Chandaroy [9] and Zhao et al. [10] show that solder joint deformation is in the elastic range for vibration loading. The global-local or submodelling technique [11-13] has been used for board-level FE simulation. In this study, four different model cases were investigated by FEA modal analysis to calculate the first-order natural frequency of the FCOB assembly.
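The bare-plate approximation described above can be made concrete with the classical formula for the fundamental frequency of a simply supported rectangular thin plate. The sketch below assumes generic FR-4-like material values and an arbitrary board size for illustration, not the study's actual board:

```python
import math

# First natural frequency of a simply supported rectangular thin plate,
# used to approximate a bare, unpopulated PCB. Material and geometry are
# generic FR-4-like values chosen for illustration.
E = 22e9           # Young's modulus, Pa
nu = 0.15          # Poisson's ratio
rho = 1850.0       # density, kg/m^3
h = 1.6e-3         # board thickness, m
a, b = 0.10, 0.08  # board side lengths, m

# Flexural rigidity D = E*h^3 / (12*(1 - nu^2))
D = E * h**3 / (12 * (1 - nu**2))

# Mode (1,1): f11 = (pi/2) * sqrt(D / (rho*h)) * (1/a^2 + 1/b^2)
f11 = (math.pi / 2) * math.sqrt(D / (rho * h)) * (1 / a**2 + 1 / b**2)
print(f"first natural frequency ~ {f11:.0f} Hz")
```

Mounting components shifts both stiffness and mass, which is why this bare-plate figure is only a first estimate for populated assemblies, as the text notes for FCOB and PBGA packages.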
A quasi-static analysis approach was conducted for the FCOB assembly to evaluate the stress-strain behaviour of the solder joints. A harmonic analysis was also performed to study the dynamic response of the FCOB assembly subjected to vibration load. Fatigue life prediction results from the quasi-static analysis and harmonic analysis approaches were compared to the test results.

1.3 PROJECT PURPOSE

In the modern world, due to health and environmental concerns, the electronic manufacturing industries face the challenging problem of needing to produce reliable solder products at very high density and very low cost.

Solder joints are very important to the reliability of printed circuit boards (PCBs), as they are one of the leading means of providing electrical and thermal connections. In every PCB, even the smallest solder joints are important.

This project therefore investigates the effect of vibration on solder joint reliability in electronics assembly applications. The solder joint of an electronic assembly is a very important element, and this model-based study might help engineers effectively improve the PCB mechanical design, and thus improve the reliability of electronics attached to the PCB, by considering realistic uncertainties and adverse vibration environments.

CHAPTER 02
LITERATURE REVIEW

2.1 SINE ON RANDOM VIBRATION TESTING

Sine on random vibration testing is performed by superimposing a sine wave on top of a random environment. A sine on random vibration test duplicates the combined environment of a spinning helicopter blade, with its distinct resonant levels, and the rest of the aircraft, which generates random engine- and aerodynamically-induced vibration. Gunfire on board an aircraft causes sine vibration while the rest of the aircraft generates random excitations.
These types of tests duplicate vibration characterized by dominant peaks (sinusoids) superimposed on a broadband background. Another variation would be a swept sine-on-random test.

2.2 SINUSOIDAL VIBRATION TESTING

Dynamic deflections of materials caused by vibration can cause a host of problems and malfunctions, including failed electrical components, deformed seals, optical and mechanical misalignment, cracked or broken structures, excessive electrical noise, electrical shorts, and chafed wiring. Sine vibration is basically a certain fundamental frequency and the harmonics of that fundamental; in its pure state, this type of vibration is generated by a limited but significant number of sources. Expressed as amplitude versus frequency, sine vibration is the type of vibration generated in the field by sources such as engine rotational speeds, propeller and turbine blade passage frequencies, rotor blade passage and launch vehicles.

While much real-world vibration is random, sine vibration testing accomplishes several important goals in product qualification and testing. Much material and many finished products were modeled on some type of sine vibration signature. A sine sweep of frequencies will determine whether the assumptions were good and whether the deviations are significant enough to cause design changes. In other words, a sweep will establish whether the anticipated frequency has been met and/or discover the test item's fundamental frequency. Similarly, a sweep will help identify the test subject's resonance frequencies, which may be the points at which the item experiences particularly stressful deflections. By dwelling at those frequencies in subsequent tests, premature failures due to the properties of the material may come to light before the item sees field use. Some of the follow-on tests include fixed frequency at higher levels of the controlling variable (displacement, velocity, acceleration), and random vibration.
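The sweep behavior described above can be made concrete. For a logarithmic sweep the instantaneous frequency doubles once per octave period; the frequency range and octave rate below are illustrative assumptions:

```python
import math

def sweep_frequency(t_s, f_start_hz, rate_oct_per_min):
    """Instantaneous frequency of a logarithmic sine sweep at time t_s."""
    return f_start_hz * 2.0 ** (rate_oct_per_min * t_s / 60.0)

def sweep_duration(f_start_hz, f_end_hz, rate_oct_per_min):
    """Seconds needed to sweep from f_start to f_end at the given rate."""
    return 60.0 * math.log2(f_end_hz / f_start_hz) / rate_oct_per_min

print(sweep_frequency(60.0, 20.0, 1.0))             # 40.0 (one octave after one minute)
print(round(sweep_duration(20.0, 2000.0, 1.0), 1))  # 398.6 s to cover 20-2000 Hz
```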
Per customer request, NTS will run sweeps in one direction, decreasing, increasing or bi-directionally, and can change frequency logarithmically or linearly. Another typical sine vibration test, the sine burst such as the teardrop, goes rapidly to a peak pulse and then decays at a lower rate (to prevent damage to the unit). The burst test puts a maximum load into an article at a rapid rate and particularly stresses joints and seams to identify workmanship and design issues.

2.3 RANDOM VIBRATION TESTING

The legitimacy of random vibration as an effective tool for screening workmanship defects became apparent during manufacturing. Up until then, limited-frequency sine vibration was applied during reliability testing. Pure sinusoidal vibration is composed of a single frequency at any given time. Comparison tests revealed that, to equal the effectiveness of random vibration, the test item would have to be subjected to many sine frequencies over a longer period of time, which may unintentionally fatigue the test item. Random vibration uncovers defects faster.

2.4 REAL WORLD SIMULATION

Most vibration in the real world is random. For example, a vehicle travelling over a road experiences random vibration from the road irregularities. A ground-launched rocket vehicle experiences non-stationary vibration during its flight: the motor ignites, the rocket travels through the atmosphere, the motor burn ends, and so forth. Even a wing, when subjected to turbulent air flow, undergoes random vibration. Random vibration is composed of a continuous spectrum of frequencies, and the motion varies randomly with time. It can be represented in the frequency domain by a power spectral density function in G²/Hz.

HIGHLY-ACCELERATED LIFE TESTING (HALT)

HALT exposes the product to step-wise cycling of environmental variables such as temperature, shock and vibration. HALT involves vibration testing in all three axes using a random mix of frequencies.
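Section 2.4 describes a random profile as a power spectral density in G²/Hz; the overall GRMS of such a profile is the square root of the area under the PSD curve. A minimal sketch that integrates each log-log segment exactly; the flat 0.04 g²/Hz, 20-2000 Hz profile is a common textbook example, not a profile from this project:

```python
import math

def grms(psd_points):
    """Overall GRMS of a random vibration profile given as
    [(freq_hz, psd_g2_per_hz), ...] breakpoints, treating each
    segment as a straight line on log-log axes."""
    area = 0.0
    for (f1, p1), (f2, p2) in zip(psd_points, psd_points[1:]):
        n = math.log(p2 / p1) / math.log(f2 / f1)   # log-log slope of the segment
        if abs(n + 1.0) < 1e-12:                    # slope of -1 needs the log form
            area += p1 * f1 * math.log(f2 / f1)
        else:
            area += p1 * f1 / (n + 1.0) * ((f2 / f1) ** (n + 1.0) - 1.0)
    return math.sqrt(area)

print(round(grms([(20.0, 0.04), (2000.0, 0.04)]), 2))  # 8.9 GRMS
```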
Finally, HALT testing can include the simultaneous cycling of multiple environmental variables, for example temperature cycling plus vibration testing. This multi-variable testing approach provides a closer approximation of real-world operational environments. Unlike conventional testing, the goal of HALT testing is to break the product. When the product fails, the weakest link is identified, so engineers know exactly what needs to be done to improve product quality. After a product has failed, the weak component(s) are upgraded or reinforced. The revised product is then subjected to another round of HALT testing, with the range of temperature, vibration, or shock further increased, so the product fails again. This identifies the next weakest link. By going through several iterations like this, the product can be made quite robust. With this informed approach, only the weak spots are identified for improvement. This type of testing provides so much information about the construction and performance of a product that it can be quite helpful for newer engineers assigned to a product with which they are not completely familiar.

HALT testing must be performed during the design phase of a product to make sure the basic design is reliable. But it is important to note that the units being tested are likely to be hand-made engineering prototypes. At Trace, we have found that HALT testing should also be performed on actual production units, to ensure that the transition from engineering design to production design has not resulted in a loss of product quality or robustness. Some engineers may consider this approach scientifically reasonable but financially unrealistic. However, the cost of HALT testing is much less than the cost of field failures.

HIGHLY-ACCELERATED STRESS SCREENING (HASS)

HASS testing is an on-going screening test, performed on regular production units.
Here, the idea is not to damage the product, but rather to verify that actual production units continue to operate properly when subjected to the cycling of environmental variables used during the HASS test. The limits used in HASS testing are based on a skilled interpretation of the HALT testing parameters. The importance of HASS testing can be appreciated when one considers today's typical manufacturing scenario. Circuit boards are purchased from a vendor who uses materials purchased from other vendors. Components and sub-assemblies are sourced from manufacturers all over the world. Often, the final assembly of the product is performed by a subcontractor. This means that the quality of the final product is a function of the quality (or lack thereof) of all the components, materials, and processes which are a part of that final product. These components, materials, and processes can and do change over time, thereby affecting the quality and reliability of the final product. The best way to ensure that production units continue to meet reliability objectives is through HASS testing.

RELIABILITY

Reliability is defined as the probability that a device will perform its required function under stated conditions for a specific period of time. Predicting with some degree of confidence is very dependent on correctly defining a number of parameters. For instance, choosing the distribution that matches the data is of primary importance. If a correct distribution is not chosen, the results will not be reliable. The confidence, which depends on the sample size, must be adequate to make correct decisions. Individual component failure rates must be based on a large enough population and must truly represent present-day normal usage. There are empirical considerations, such as determining the slope of the failure rate and calculating the activation energy, as well as environmental factors, such as temperature, humidity, and vibration.
Lastly, there are electrical stressors such as voltage and current. Reliability engineering can be somewhat abstract in that it involves much statistics, yet it is engineering in its most practical form: will the design perform its intended mission? Product reliability is seen as a testament to the robustness of the design as well as the integrity of the quality and manufacturing commitments of an organization.

One of the fundamentals of understanding a product's reliability is understanding the calculation of the failure rate. The traditional method of determining a product's failure rate is through accelerated vibration operating life tests performed on a sample of devices. The failure rate obtained on the life test sample is then extrapolated to end-use conditions by means of predetermined statistical models to give an estimate of the failure rate in the field application. Although there are many other stress methods employed by electronic assembly manufacturers to fully characterize a product's reliability, the information generated from operating life test sampling is the principal method used by the industry for estimating the failure rate of an electronic assembly in field service.

Failure Rate (λ): Measure of failures per unit of time. The useful life failure rate is based on the exponential life distribution. The failure rate typically decreases slightly over early life, then stabilizes until wear-out, which shows an increasing failure rate. Wear-out should occur beyond the useful life.

Failure In Time (FIT): Measure of failure rate in 10^9 device hours; e.g. 1 FIT = 1 failure in 10^9 device hours.

Total Device Hours (TDH): The summation of the number of units in operation multiplied by the time of operation.

Mean Time Between Failures (MTBF): Reliability is quantified as MTBF (Mean Time Between Failures) for repairable product and MTTF (Mean Time To Failure) for non-repairable product. A correct understanding of MTBF is important.
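The TDH and FIT definitions above translate directly into arithmetic; the unit counts and failure numbers below are illustrative only:

```python
def total_device_hours(units, hours):
    """TDH: number of units in operation multiplied by time of operation."""
    return units * hours

def failure_rate_fit(failures, device_hours):
    """Failure rate in FITs: failures per 10**9 device hours."""
    return failures / device_hours * 1e9

tdh = total_device_hours(1000, 5000)     # 5,000,000 device hours
print(round(failure_rate_fit(2, tdh)))   # 400 FITs
```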
A power supply with an MTBF of 40,000 hours does not mean that the power supply should last for an average of 40,000 hours. According to the theory behind the statistics of confidence intervals, the statistical average becomes the true average as the number of samples increases. An MTBF of 40,000 hours for one module becomes 40,000/2 for two modules and 40,000/4 for four modules. Sometimes failure rates are measured in percent failed per million hours of operation instead of MTBF. The FIT is equivalent to one failure per billion device hours, which is equivalent to an MTBF of 1,000,000,000 hours. The formula for calculating the MTBF is:

θ = T/R

where θ = MTBF, T = total time, and R = number of failures.

MTTF stands for Mean Time To Failure. To distinguish between the two, the concept of suspensions must first be understood. In reliability calculations, a suspension occurs when a destructive test or observation has been completed without observing a failure. MTBF calculations do not consider suspensions whereas MTTF does. MTTF is the number of total hours of service of all devices divided by the number of devices:

θ = T/N

where θ = MTTF, T = total time, and N = number of units under test. It is only when all the parts fail with the same failure mode that MTBF converges to MTTF.

If the MTBF is known, one can calculate the failure rate as the inverse of the MTBF: λ = r/T, where r is the number of failures. Once an MTBF is calculated, the survival probability can be derived from the following relation:

R(t) = e^(−t/MTBF)

Confidence Level or Limit (CL): Probability level at which population failure rate estimates are derived from sample life tests. The upper confidence level interval is used.

Acceleration Factor (AF): A constant derived from experimental data which relates the times to failure at two different stresses.
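The θ = T/R and R(t) relations above can be checked with a short calculation; it also makes the 40,000-hour misconception concrete, since only about 37% of units are expected to survive to one MTBF:

```python
import math

def mtbf(total_time, failures):
    """theta = T / R: total operating time divided by number of failures."""
    return total_time / failures

def reliability(t, mtbf_hours):
    """R(t) = exp(-t / MTBF): probability of surviving to time t."""
    return math.exp(-t / mtbf_hours)

theta = mtbf(total_time=400_000, failures=10)
print(theta)                                  # 40000.0
print(round(reliability(40_000, theta), 3))   # 0.368: ~37% survive one MTBF
```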
The AF allows extrapolation of failure rates from accelerated test conditions to use conditions.

THE BATHTUB CURVE

The life of a population of units can be divided into three distinct periods. Figure 1 shows the reliability bathtub curve, which models the cradle-to-grave instantaneous failure rate vs. time. If we follow the slope from the start to where it begins to flatten out, this can be considered the first period. The first period is characterized by a decreasing failure rate; it is what occurs during the early life of a population of units. The weaker units die off, leaving a population that is more robust. This first period is also called the infant mortality period. The next period is the flat portion of the graph, called the normal life. Failures occur more in a random sequence during this time. It is difficult to predict which failure mode will manifest, but the rate of failures is predictable; notice the constant slope. The third period begins at the point where the slope begins to increase and extends to the end of the graph. This is what happens when units become old and begin to fail at an increasing rate.

RELIABILITY PREDICTION METHODS

A lot of time has been spent on developing procedures for estimating the reliability of electronic equipment. There are generally two categories: (1) predictions based on individual failure rates, and (2) demonstrated reliability based on operation of equipment over time.
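The three bathtub periods are commonly modeled with the Weibull hazard function, whose shape parameter β selects a decreasing, constant, or increasing failure rate. This is a standard modeling choice rather than something stated in the text; a minimal sketch:

```python
def weibull_hazard(t, beta, eta):
    """Instantaneous failure rate h(t) = (beta/eta) * (t/eta)**(beta - 1).
    beta < 1: infant mortality; beta = 1: constant-rate normal life;
    beta > 1: wear-out (the three bathtub-curve periods)."""
    return (beta / eta) * (t / eta) ** (beta - 1.0)

eta = 1000.0  # illustrative characteristic life (hours)
print(weibull_hazard(10.0, 0.5, eta) > weibull_hazard(100.0, 0.5, eta))  # True: rate falls
print(weibull_hazard(10.0, 3.0, eta) < weibull_hazard(100.0, 3.0, eta))  # True: rate rises
```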
Prediction methods are based on component data from a variety of sources: failure analysis, life test data, and device physics. For some calculations (e.g. military applications) MIL-HDBK-217 is used, which is considered to be the standard reliability prediction method. A simple failure rate calculation based on a single life test would follow Equation 1:

λ = (number of failures) / (TDH × AF)

where λ = failure rate, TDH = Total Device Hours = number of units × hours under stress, and AF = acceleration factor (see Equation 3).

Since reliability data can be accumulated from a number of different life tests with several different failure mechanisms, a comprehensive failure rate is desired. The failure rate calculation can be complicated if there is more than one failure mechanism in a life test, since the failure mechanisms are thermally activated at different rates. Equation 1 accounts for these conditions and includes a statistical factor to obtain the confidence level for the resulting failure rate, where:

λ = failure rate in FITs (number of failures in 10^9 device hours)
β = number of distinct possible failure mechanisms
k = number of life tests being combined
xi = number of failures for a given failure mechanism i, i = 1, 2, ... β
TDHj = total device hours of test time for life test j, j = 1, 2, ... k
AFij = acceleration factor for the appropriate failure mechanism, i = 1, 2, ... β
M = χ²(α, 2r + 2) / 2
χ² = chi-square factor for 2r + 2 degrees of freedom
r = total number of failures (Σ xi)
α = risk associated with CL, between 0 and 1

2.2 SOLDER PASTE

2.2.1 ROLE OF SOLDER PASTE IN REFLOWING

Solder paste is a mixture of a flux composition and a finely ground, powdered solder metal alloy that is normally used in the electronics industry in joining processes. It also serves as the attachment medium between the device interconnection features and the PCB itself.
The components of a solder paste are specially designed for excellent printing and reflow characteristics. The typical surface mount soldering process involves placing the component and a small amount of solder paste on a printed circuit board. The system is then heated until the solder reflows and forms an electrical connection between the solder pad and the electrical contact of the electronic part. When the reflow is finished, it forms both an electrical and a mechanical connection between the electronic components and the printed circuit board.

2.2.2 SELECTION CRITERIA OF A SOLDER PASTE

Selection of a solder paste is a very important factor for the reflow process, its reliability and its quality. The following factors matter for a good solder paste [6]:

- The size of the solder alloy particles in the solder paste
- The tendency to form voids
- The properties of the flux medium of the solder paste
- Alpha particle emission rate
- The design of the print to be used for deposition
- Thermal properties of the solder paste
- Electrical properties of the solder paste

CHAPTER 03
EXPERIMENTS

3.1 MATERIALS AND METHODOLOGY

SOLDER PASTE

Both solder pastes were used following the same procedure. The details of the solder pastes used in the experiment are given in the following table:

TYPE OF SOLDER PASTE | ALLOY CODING | PARTICLE SIZE | METAL LOADING
S1 | Sn95.5Ag4Cu0.5 | - | -
S2 | Sn42Bi57Ag1 | - | -

Table 3.1.1: Types of solder paste used in the experiment

For this project, each solder paste should be kept in a container with appropriate labelling and identification on it to distinguish it from tin-lead solder paste. The solder paste should be stored in a refrigerator between 35 and 45 °F, and should be allowed to come to room temperature for a minimum of four hours before doing the solder paste printing. Once printing is finished, the solder paste must be returned to the refrigerator, since it cannot remain at room temperature for over 24 hours.
The shelf life of the lead-free solder pastes may be reduced from the typical six months. The above guidelines were strictly followed in this project, not only to guarantee the quality of the solder paste but also as a good way to reduce errors that might affect the final results of the project.

SOLDER PASTE PRINTING

IMPORTANCE OF SOLDER PASTE PRINTING

Surface mount technology (SMT) is used extensively in the electronics industry, and potentially more reliable products can be designed and manufactured using SMT. The solder paste stencil printing process is a very critical and important step in the surface mount manufacturing process. Most soldering defects are due to problems in the screening process, so major consideration must be given to the operation and set-up steps of the stencil printing process. When these factors are monitored carefully, defects can be minimized. The main purpose of printing solder paste on the PCB is to supply the correct amount of solder alloy to the solder joint. The print must also be aligned correctly so that perfect component placement can be achieved.

PRINTING PROCESS PARAMETERS

The following parameters are very important to the printing process.

STENCIL

Stencils are used so that the solder paste slips easily off the aperture edges, which secures a uniform print. For this process we use electroformed stencils, because these stencils have very sharp, slightly conical edges. Generally a stencil is made from copper or nickel [12].

ENVIRONMENT

Dust and dirt from the air that reach the PCBs and stencils can cause defects and poor wettability in the reflow soldering process. So PCBs should be stored in sealed packages and cleaned before use.

SOLDER PASTE

Solder paste characteristics must be controlled to achieve maximum production results.
Some of the factors are given below [12]:

- Percent of metal
- Viscosity
- Slump
- Solder balls
- Flux activity, working life and shelf life

STENCIL PRINTING PARAMETERS

Stencil printing parameters are very important factors in achieving the best yield. The following parameters must be monitored and controlled in a printing process:

- Squeegee pressure = 8 kg
- Squeegee speed = 20 mm/s
- Separation speed = 100%
- Printing gap = 0.0 mm

These factors and limits can be adjusted for our project purposes.

SOLDER PASTE PRINTING EQUIPMENT AND PROCESS

Figure 3.2.4.1: DEK 260 stencil printing machine

The DEK 260 stencil printing machine is used to print solder paste onto the circuit board. This machine has two main functions:

- Registering the position of the product screen within the print head
- Positioning the circuit board below the stencil, ready for the print cycle

The boards to be printed are supported on magnetic tooling and held by vacuum cups arranged on the plate to keep the board steady during printing. The first step of the experiment is to do the solder paste printing onto the board. In this project a metal stencil could not be obtained, so the circuit boards were printed by hand, following the procedure below:

- Put weights onto the stencil to fix it
- Roll the squeegee over the stencil
- The solder paste presses through the apertures onto the PCB
- Separate the stencil

Two circuit boards were printed with solder paste for each solder paste type, so four circuit boards were printed in total.

SOLDER PASTE REFLOW PROCESS/PROFILE

Figure 3.3.1: Reflow oven

To achieve a good, reliable solder joint the reflow process is very important. Reflow with Sn-Pb solder paste is often performed at a minimum peak temperature of about 203 °C, which is 20 K above the Sn-Pb liquidus temperature. Reflow with lead-free solder paste has to be performed at a minimum peak temperature of 230 °C.
That is just 13 K above the melting temperature. It is generally accepted that lead-free solders require a higher reflow temperature, up to 220-230 °C. The reflow profile affects the reliability of a solder joint because it is a major factor influencing the formation of the intermetallic layers in a solder joint. The intermetallic layer is a critical part of a solder joint, and the intermetallic bond should be thin. Consequently, a good reflow profile must produce solder bumps with a thin intermetallic layer.

PREHEAT ZONE

This zone determines how fast the temperature changes on the printed circuit board. The ramp-up rate is usually between 1 and 3 °C per second. If this rate is exceeded, components can be damaged by thermal shock. In the preheat zone the solvents in the solder paste begin to evaporate, so if the rise rate is too low the evaporation of the flux is incomplete, which affects the quality of the solder joint.

THERMAL SOAK ZONE

This is also called the flux activation zone. The thermal soak zone takes 60-120 seconds for the removal of solder paste volatiles and the activation of the fluxes. Solder spattering and balling will happen if the temperature is too high or too low. By the end of this thermal soak zone, a thermal equilibrium should be reached across the entire circuit board.

REFLOW ZONE

Only in the reflow zone is the maximum temperature reached. In this zone we have to consider the peak temperature, that is, the maximum allowable temperature of the entire process. It is very important to ensure that the process does not exceed the peak temperature in this zone: exceeding it may damage the internal dies of SMT components and block the growth of the intermetallic bonds. We also have to consider the profile time; if the time exceeds the manufacturer's specification, it too affects the circuit board's quality.

3.3.4 COOLING ZONE

The last zone in the reflow process is the cooling zone.
Proper cooling inhibits excess intermetallic formation and thermal shock to the components. Generally the cooling zone temperature range is 30-100 °C. In this project I selected the following temperature profile, which is a standard reflow profile for lead-free soldering:

Zone 1: 220 °C
Zone 2: 180 °C
Zone 3: 170 °C
Zone 4: 190 °C
Zone 5: 233 °C
Zone 6: 233 °C

In total, four circuit boards were printed. With a good reflow profile chosen, no defects or damage appeared on the printed circuit boards.

Figure 3.3.4.1: A printed circuit board after reflow

SET UP EVENT DETECTOR

The constructed PCBs were connected to the event detector by a ribbon data cable. The ribbon cable was addressed according to the Analysis Tech STD series event detector manual: pins 1 to 32 function as source points and pins 33 to 37 function as ground points. To obtain a closed-loop circuit to monitor the behavior of the PCB components, PCB boards 1, 2, 3 and 4 were connected to channels 1, 2, 33 and 34 respectively.

Figure: Ribbon cable

After connecting the ribbon cable to the event detector and the environment chamber, the channels were assigned in the WIN DATA LOG software supplied with the event detector. For this test the following settings were defined for data acquisition.

INVESTIGATING RELIABILITY OF SOLDER JOINTS UNDER VIBRATION CHAMBER

In this study, the PCBs were used in a variable frequency vibration test to analyse the dynamic response of the PCB assembly subjected to random vibration loading. The PCB specimens were tested at different acceleration levels to assess solder joint reliability under varying G-level vibration loads (G is the gravitational acceleration). Vibration tests were accomplished by using an electrodynamic shaker
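The preheat and reflow zone limits described earlier (a ramp-up rate of 1-3 °C per second and a capped peak temperature) lend themselves to a simple automated screen of a logged temperature profile. A minimal sketch; the default limits and the sample profile are illustrative assumptions, not values from this project or any paste datasheet:

```python
def check_reflow_profile(samples, max_ramp_c_per_s=3.0, peak_limit_c=245.0):
    """Screen a logged profile [(time_s, temp_c), ...] against two of the
    rules described above: ramp-up rate and peak temperature."""
    problems = []
    for (t1, c1), (t2, c2) in zip(samples, samples[1:]):
        rate = (c2 - c1) / (t2 - t1)
        if rate > max_ramp_c_per_s:
            problems.append(f"ramp {rate:.1f} C/s after t={t1}s risks thermal shock")
    peak = max(c for _, c in samples)
    if peak > peak_limit_c:
        problems.append(f"peak {peak:.0f} C exceeds component limit")
    return problems

# A plausible lead-free-style profile (times in s, temperatures in C)
profile = [(0, 25), (60, 150), (150, 180), (210, 235), (270, 100)]
print(check_reflow_profile(profile))  # [] -> no rule violations
```

A real screen would also check the soak duration (60-120 s) and the time above liquidus against the paste manufacturer's specification.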

Monday, June 3, 2019

Factors Causing Youth Violence Measures To Prevent It Criminology Essay

The Diagnostic and Statistical Manual of Mental Disorders (DSM-IV-TR) of the American Psychiatric Association (2000) describes the essential feature of a conduct disorder diagnosis as a persistent pattern of behavior, demonstrated by a child, which violates the basic rights of others or breaks major societal norms or rules. Oppositional defiant disorder is characterized by negative, disobedient, or defiant behavior that exceeds the normal testing behavior that most children exhibit, and it may later lead to a diagnosis of conduct disorder in some youth. Many of the children diagnosed with conduct disorder end up committing criminal offenses because they lack empathy, to the extent that they act out in the face of social stigma or criminal laws. The present review has four purposes: (a) to identify the clinical and theoretical framework of violent youths, (b) to focus on specific risk factors that contribute to early-age violence, (c) to outline protective factors that buffer youth violence, and (d) to explore systemic-ecological therapeutic methods to address youth violence. For these purposes several articles and the data collected will be discussed.

Youth Violence

In recent years attention has been focused on the apparent rise in youth violence. Most of this attention has been fueled by several high-profile cases in the media. Events like the Columbine shootings and the Virginia Tech massacre provide good case examples. Violence, as defined legally, refers to the use of physical force, specifically physical force with hostility that attempts to harm or harms someone (Webster, 2010). Youth violence refers to violence that starts in the period of life between childhood and adulthood.
A number of behaviors such as the use of weapons, physical/sexual assault, bullying, etc., may be part of violent behavior in young adults, as illustrated in the cases noted above. Studies have analyzed the prevalence of mental disorders and behavioral issues such as schizophrenia, post-traumatic stress disorder, conduct disorder (CD) and, as of late, bipolar disorder in the development of violent youth (Juvenile Delinquency, 2010). For the purpose of this literature review I will focus on conduct disorder as the precursor to antisocial personality disorder, which statistics show has been diagnosed in 80-85% of incarcerated criminals (Long, 2009). Conduct disorder accounts for approximately 50% of incarcerated youth, males and females (Fazel et al., 2008).

Conduct disorder develops during childhood and manifests itself during adolescence. The DSM-IV-TR codes 312.xx (where xx varies with the specific subtype exhibited) delineate that adolescents diagnosed with conduct disorder disregard social norms and show a lack of empathy. Violent youth who have gone through the criminal justice system on several occasions are likely to have been diagnosed with conduct disorder. This is particularly true of those violent youth who time and time again show a disregard for their own and others' safety and property (Juvenile Delinquency, 2010). A documented history of conduct disorder before the age of fifteen represents one of the criteria used in diagnosing a young adult with antisocial personality disorder. An antisocial personality disorder diagnosis indicates a greater risk on the part of a young adult of exhibiting persistent and serious criminal behavior.
Both conduct disorder and antisocial personality disorder are characterized by unpredictable violent behavior and lack of empathy. Consequently, adolescents who have persistently been involved with the criminal system and have been diagnosed with conduct disorder are at a higher risk of showing signs of antisocial personality disorder as they develop into adults (Conduct Disorder, 2010). Antisocial personality disorder is a common diagnosis for serial killers, who often fantasize about killing several victims and then fulfill their impulses when they are no longer capable of suppressing them.

Youth violence develops in different ways. Children and adolescents who are diagnosed with oppositional defiant disorder and conduct disorder exhibit problem behavior early in childhood. This problem behavior can persist and increase as the child develops into a young adult. Studies suggest that aggression in childhood is a good predictor of the same in adolescence and young adulthood (CDC, 2002). The research indicates that there are several risk factors that contribute to youth violence. There are individual factors comprising biological, psychological, and behavioral issues which may be exhibited in childhood or adolescence. A child's family, friends, culture and social setting may influence the individual factors. Of particular interest in most studies is the impact of family, which is greatest in childhood, and the impact of peers, which is of greater influence in adolescence (CDC, 2008). Some of the individual factors observed are low IQ (substandard academic performance), attention deficit hyperactivity disorder, drug and/or alcohol abuse, tobacco use, and an early history of problem behavior and/or violent victimization. The latter is strongly associated with youth violence.
A link between low IQ and violence is strongest among boys who have the following traits: a dysfunctional family, exposure to violence, antisocial beliefs and attitudes, a history of treatment for emotional issues, strong stressors, poor social-cognitive abilities, poor impulse control and lower socioeconomic status (CDC, 2002). Parental behavior and family environment are central factors when it comes to youth violence. Parents who do not monitor and supervise their children and who discipline with harsh corporal punishment have been shown to be strong predictors of youth violence (CDC, 2008). As indicated, the onset of violent behavior in youth is strongly linked to parental conflict in early childhood as well as poor attachment between children and parents. In addition, traits such as a large number of children in the family, a mother who had her first child at an early age, possibly as a teenager, and a low level of family cohesion have been shown to contribute to youth violence. These factors can have a detrimental effect on a child's social and emotional functioning and behavior, particularly in the absence of social supports (CDC, 2002). Consequently, violent youths who have witnessed violence in the home, or have been physically or sexually abused, may see violent behavior as an acceptable way of resolving conflict (CDC, 2002).

Social influences, in particular peer pressure during adolescence, may normally be seen as positive and important in shaping interpersonal relationships. Nevertheless, these influences may also have a negative effect if the peer pressure stems from aggressive and violent youth. That is, delinquency can cause peer bonding which, inversely, causes delinquency (Harding, 2009). In fact, when young adults with depression socialize with youth offenders, they are more likely to act out violently towards others.
Harding (2009) indicated that the most significant contributing factors to youth violence were depression and having youth offenders as peers, in addition to a parent's psychological abuse of a partner, antisocial personality, negative relationships with adults, and family conflict. The composition of a family has also been shown to be a significant factor in the development of violent behavior in youth. Findings from studies conducted in New Zealand, the United Kingdom and the United States suggest that there is a higher risk for violence in youth from single-parent households (CDC, 2002). The risk factors attributed to family include dysfunctional family functioning, lack of child supervision, parental substance abuse or criminal history, parental lack of formal education, harsh and/or authoritarian parenting styles, and inconsistent disciplinary practices. The peer risk factors are socializing with peers who are in gangs or who are themselves juvenile delinquents, being socially rejected by others, no participation in extracurricular activities, and little interest in school or school performance (CDC, 2009).

Likewise, the social groups in which children and adolescents live play a significant role in how they relate to their parents and friends and in the circumstances in which they may be exposed to situations that lead to violence. Consequently, males in urban areas are more likely to be involved in violent behavior than those living in rural areas. Within urban settings, children and adolescents who live in neighborhoods with high levels of crime are more likely to be involved in violent behavior than those living in other neighborhoods. In addition, a correlation has been found between children and adolescents who come from a low socio-economic status and youth violence (CDC, 2008).
A field survey of young people in the United States indicated that the prevalence of self-reported assault and robbery among youths from low socio-economic classes was about twice that among middle-class youths (CDC, 2002). Community risk factors include neighborhoods that are in social disarray, little community cohesiveness, increased family disruption, increased transiency, greater numbers of poor residents and fewer economic opportunities (CDC, 2009). It is of equal importance to note the influence of culture on youth violence. There are cultures which endorse violence as an accepted manner of resolving conflicts. In these cultures the young adopt the norms and values that support violence. These cultures lack the ability to provide their youth with non-violent alternatives for resolving conflicts and consequently have been shown to have higher rates of youth violence. A study by Bedoya Marin and Jarramillo Martinez on gangs in Medellin, Colombia, analyzed how low-income youths are influenced by the culture of violence, in society in general and in their particular community. The authors indicated that the community enables a culture of violence through the growing acceptance of easy money and of whatever means are necessary to obtain it, as well as through corruption in the police, judiciary, military and local government (CDC, 2002). When considering the possible biological factors which contribute to youth violence, studies have focused on areas such as injuries and complications associated with pregnancy and delivery. The interest in these areas is fueled by the belief that they may contribute to neurological damage and in turn lead to violent behavior. The CDC noted that complications during delivery have been shown to contribute significantly to future violence when a parent had a history of psychiatric illness.
It should be noted that complications during delivery, when in conjunction with other familial factors, are a stronger predictor of youth violence (CDC, 2002). Other studies of interest have indicated that low heart rates, studied in males, correlate with behaviors such as sensation seeking and risk taking. These behaviors may act as a catalyst to violence in that they provide the necessary stimulation and arousal levels (CDC, 2002). Deficiencies in the executive functions of the brain, which are housed in the frontal lobe, may be connected to impulsiveness, attention problems, low intelligence and low educational attainment. Additional deficiencies include the inability to sustain attention and concentration, abstract reasoning and concept formation, goal formation, anticipation and planning, effective self-monitoring and self-awareness of behavior, and inhibitions regarding inappropriate or impulsive behavior (CDC, 2002). The literature indicates that hyperactivity, impulsiveness, poor behavioral control and attention problems are behavioral/personality factors that may precede violent acts by youths. Hyperactivity, high levels of daring or risk-taking behavior, poor concentration and attention difficulties in youth younger than thirteen years have been shown to be good predictors of youth violence (CDC, 2008). The CDC also found that among some juvenile offenders, situational factors may act as a catalyst to youth violence. In order to conduct a situational analysis of these events it is necessary to determine the motives for the violent behavior, where the behavior occurred, whether alcohol or weapons were present, all parties involved, including the victim and aggressor, and whether other actions were involved, such as a robbery, that would lend themselves to violence (CDC, 2002). In terms of gender, the literature indicates that most of the perpetrators of youth violence are males.
Feminist theorists who have analyzed this phenomenon have indicated that the concept of masculinity may put males more at risk of being violent. Behaviors such as appearing to be tough, powerful, aggressive, daring and competitive are ways in which males express their masculinity. Nevertheless, expressing these behaviors may be conducive to males' participation in antisocial and criminal behavior. It should be noted that males may act in this manner due to societal pressure to conform to masculine cultural standards, as in Colombia, mentioned earlier. However, one must keep in mind that males may be biologically more aggressive and greater risk takers than females (Juvenile Delinquency, 2010). This review of the literature shows that youth violence is a growing problem that affects and is affected by family, community and society at large. More and more children are not attending school out of fear of what can happen on their way to school or at school. A nationwide survey indicated that about 6% of high school students reported not going to school on one or more days in the 30 days preceding the survey (CDC, 2009). Additional ways in which youth violence impacts the community at large are disruption of social services, decreased property values, decreased productivity, and increased costs of health care (Mercy et al., 2002). Health care is a topic that is at the nation's political forefront. It is impacted by youth violence, which contributes to the costs of health care and welfare services. The CDC reports that violent youth are also involved in a range of crimes and other problems, including truancy, dropping out of school, substance abuse, compulsive lying, reckless driving and high rates of sexually transmitted diseases.
According to the CDC, more than 780,000 young adults aged ten to twenty sustain injuries due to violence and are treated in emergency rooms yearly (CDC, 2009). Factors that have been shown to buffer the risk of youth violence include individual/family protective factors such as high involvement with parents, high parental academic expectations, healthy family communication, good familial and/or adult support, healthy social orientation, high IQ and/or grade point average, and no tolerance for antisocial behavior. The consistent presence of parents at key times, such as when their children wake up, arrive home from school, during dinner or at bedtime, and involvement in their social activities, are also seen as protective factors. Peer/social protective factors are noted as involvement in extracurricular activities and an interest in and commitment to school (Resnick et al., 2004). Based on the literature review, youth violence is embedded in and linked to traits of the youth, the youth's family, peer group, school environment and community. A socio-ecological approach would aim to mitigate the risk factors (individual/family, peer/social, etc.) by focusing on the strengths of the youth and the youth's family, and doing so on a highly individualized and comprehensive basis. Of particular interest and focus would be the protective factors outlined earlier. This could be provided via home-based family services in order to assist those violent youth and their families who have limited access to therapeutic services. This would help the therapist to focus on parental empowerment in order to change the natural social network of the youth and maximize the treatment outcomes. The therapist would focus on the risk factors in the youth's social network that are contributing to the problem behavior.
The goals may include, but would not be limited to, improving the social support and network system, getting the youth involved in positive extracurricular activities, minimizing the youth's association with juvenile delinquents, improving family functioning and communication, and improving the parenting skills of caregivers. The techniques used can be drawn from cognitive behavioral, behavioral and family therapies. The therapy sessions could take place at home, at school or in a community environment (a comfortable setting for the youth and the youth's family). The treatment plan would be agreed upon with the help of family members and should then be driven by the family and not the therapist. In doing so the therapist would empower the family to promote healthy changes through the mobilization of child, family and community resources. Given the information provided on youth violence, the therapist should focus specifically on the risk factors in the child's/adolescent's and family's social networks that are linked to the violent behavior. Therefore, special attention would be given to improving the youth's outlook on academics and academic performance, improving social and familial support systems, and decreasing the influence of violent peers by removing the youth from the negative environment. These therapeutic gains would in turn have a positive effect on the youth, the youth's family and the community at large. This may begin to address and prevent the health care issues outlined earlier and other subsets of youth violence, such as school shootings and cyber bullying, to name but two.

Sunday, June 2, 2019

A Nineteenth Century Ghost Story in The Turn of The Screw by Henry James

A Nineteenth Century Ghost Story in The Turn of The Screw by Henry James. The Turn of The Screw is a classic Gothic ghost novella with a wicked twist, set in a grand old house at Bly. The story is ambiguous: we never fully know whether the apparitions exist or not, and we are left with many more questions than answers. The Governess is left in charge of two young children, Miles and Flora, with whom she later becomes obsessed, describing them as angelic. Once there, she has no contact with her employer in London, the children's enigmatic uncle, sparking suspicions that the children are unwanted. The unidentified Governess's obsessive nature is taken to another level as the darker side of Bly appears. Her sanity is called into question with her continued revelations of apparitions around the family's country residence. The story itself could not have had a larger twist in it: from being overwhelmed by the beauty and innocence of the two orphans under her care to being convinced that the ghosts of her predecessor and the master's former valet, Miss Jessel and Peter Quint, both of whom died in mysterious circumstances, have come to possess the souls of her charges. The Governess begins to take ever more desperate measures to protect them, but is it enough? A typical Gothic story in many respects, The Turn of the Screw conforms to our expectations by sharing many key features, styles and themes typical of 19th century horror fiction. A gothic story is a type of romantic fiction that predominated in English literature in the last three decades of the 18th century and the first two decades of the 19th century. The setting for this type of st... ...riously wrong with her. Taking all of these points into account, I am sure that you now agree that The Turn of the Screw is a typical 19th century gothic ghost story.
The story itself has many characteristics typical of a gothic story, and it is based around two apparitions, which is a necessity in any ghost story. Gothic stories were very popular during this period due to Darwin's book, The Origin of Species, which hugely questioned Christian beliefs. People were no longer sure of religion and became very superstitious, with ghost stories becoming very popular. They had always thought God came first; now science was starting to take over. In the 19th century people were unsure about what was real in the world. The Victorians did not know what to believe about their world and spirituality.

Saturday, June 1, 2019

The Ending of Franz Kafka's Metamorphosis :: Metamorphosis essays

The Ending of Franz Kafka's Metamorphosis. At first glance, the final four pages of Franz Kafka's novella The Metamorphosis seem to be meaningless. This assumption, however, is anything but the truth. The final four pages, although seeming to be of no importance, serve to show the reader how the Samsa family changes as a result of the main character Gregor Samsa's death. The family's changes are best exemplified in two different scenes: the scene at the kitchen table, and the scene on the trolley. During the scene at the kitchen table, there is a common change among the family members: their new willingness to do things independently. Their bold act of writing letters of excuse is a clear example of their new independence. Prior to Gregor's death, the family relied completely on Gregor's financial support and had less in terms of responsibilities. Kafka explains this lack of work when he writes that "they [Gregor's parents] had formed the conviction that Gregor was set for life in his firm . . . they were so preoccupied with their immediate troubles that they had lost all consideration for the future" (17). By taking the initiative and writing to their employers, Gregor's family proves that they no longer depend on Gregor. The scene at the kitchen table proves revealing once again when Mr. Samsa announces that he will fire the cleaning lady (17). By doing so, Mr. Samsa demonstrates that he has changed and can take responsibility. Grete (Gregor's sister) and Mrs. Samsa also show that they have changed by not contesting Mr. Samsa's decision to fire the cleaning lady. In retrospect, firing the cleaning lady is an additional step towards change from the past. The second revealing scene is the scene on the trolley. In this scene, Kafka reveals the family's plans for the future, as well as the significant changes in Grete. He also emphasizes that leaving the apartment together is something "they [the family] had not done in months" (58).
This demonstrates again their change to independence. Similarly, the family's plan to buy a smaller and cheaper apartment (58) further proves that they have become independent. Kafka's remarks pertaining to Grete reveal a different kind of change. During all of the uproar involving Gregor, Grete matured both physically and mentally.