Monday, September 30, 2019

Ethnic Background Essay

My name is Stephanie Flowers, and until this class assignment I never thoughtfully considered what my actual ethnic background was. After looking up the meaning of my last name, I found it to be of Welsh origin. This means that I could possibly trace my family roots back to Great Britain. After reading chapter one of Race and Ethnic Relations, I discovered that being part of a certain ethnic background does not mean that you have to be part of that race, but that you have to practice the common cultural traditions of that subculture. So based on my name, some people might think that I was English. I grew up with a few household traditions that I consider to be part of my ethnic background. To begin with, I would consider my family to practice Irish traditions. We always have a huge St. Patrick’s Day party at my house that turns into an all-day drinking and eating celebration. I might not know all the reasons behind this celebration and what I consider a part of my family tradition, but it is still a part of my ethnic background in my opinion. Drinking is a big part of being Irish, and in my family most celebrations do involve some type of alcohol. In Irish culture it is normal to introduce alcohol to children before the actual drinking age. I think this is good because we grow up with it around us and never take the consumption of alcohol to extremes, because it is such a part of everyday life. I would also consider myself to be of German ethnicity. My family has a good luck tradition for New Year’s. We eat pork roast and sauerkraut in hopes that the next year will be filled with lots of happiness and joy. I personally love this tradition and get upset that I sometimes only get to eat this meal once a year. It is one of my top five meals that my grandma cooks. Being from the United States of America, I do think that I have picked up many different traditions from all of the friends that I have had throughout my life.
This is why I love living in this country: I get to experience so many different cultures and ethnic backgrounds. My boyfriend is of the Catholic religion; his family maintains this culture by attending mass for Christmas, Easter, and whenever they feel the need to strengthen their relationship with God. This is the way that they maintain their ethnic background. They are a common group of people who believe the idea that they all share the same cultural heritage. This stems from his Irish background, as Catholicism is the main religion practiced in that country. So to me this is the way to stay in touch with being from Ireland even though they now live in America. My cousin Rachael is half Mexican. Her mom was an immigrant and came to this country to start a better life. I love getting to go to their house in Texas; her mom makes the best Mexican dishes. What I have noticed from her Mexican culture and ethnicity is that food is a way of bringing the family together; it is a time of bonding in their household. I believe that by cooking traditional Mexican food she keeps in touch with her roots and feels better connected to her homeland. After doing this assignment I am very interested in doing more research on my family roots and discovering more about who and where I came from. Like it says in the Ancestry.com commercial, “you don’t have to know what you’re looking for; you just have to start looking.” So that’s what I am going to start doing. Also, I would like to conduct interviews with the elderly people in my family and possibly start filling out my family tree, which would be an easy way to connect the dots of my family history. I will still continue to maintain my Irish and German background by drinking on St. Patrick’s Day and eating pork roast and sauerkraut. I am very proud to be an American, and I wish I didn’t take all of the rights that I have in this country for granted.

Sunday, September 29, 2019

Cultural Identity Interview and Analysis

An interview with a member of the Mexican American community was conducted on December 1st, 2007. This research will provide a summary of that interview; particularly, it will include a description of the rules, norms, traditions, and values of Mexican American culture. The research will also discuss: a) how assumptions regarding cultural norms affect the interviewee’s behavior in his daily life; b) any disadvantages related to his culture being outside “the norm” and how he reacts to that; c) any advantages related to being assimilated to the “normative” culture and how he reacts; d) his sources of strength or support; e) an analysis of the four dimension theory and how it relates to the interviewee’s life. Finally, the research will provide some insight on how one can effectively communicate with people of other cultures. In fact, communication with various members that belong to other ethnic groups (i.e. Arab Americans, Hispanic Americans, African Americans, etc.) may be difficult at times because of the cultural differences that exist between the groups. Jose Luis Aguilar was born in Tijuana, Mexico on January 7th, 1972. Mexican is the ethnic group he belongs to. He lived in Mexico for 29 years. In 2001, he immigrated to Los Angeles, California. Job opportunities, the presence of family members, and the presence of a massive Mexican community were the factors that led him to immigrate to the United States. Mexican Americans are the largest Hispanic or Latino ethnic group in the United States. According to the 2000 Census, approximately 20 million Hispanics or Latinos of the 35 million in the United States are Mexican Americans (U.S. Census, 2000). Mr. Aguilar’s native language is Spanish. However, during the years spent in California, he was able to learn basic English, although English remains a second language for him, as it does for the majority of Mexican Americans who live in the United States.
Richard Schaefer stated that “as of 2002, about 23 percent of Mexican Americans are English dominant, 26 percent are bilingual, and 51 percent are Spanish dominant” (Schaefer, 2006, p. 241). Mr. Aguilar’s religion is Catholic. Indeed, “the largest number of Catholic immigrants to the United States comes from Mexico; Mexico also sends the largest number of Protestant immigrants to the United States” (Murray, 2006). Mr. Aguilar is married. He has a 1-year-old son. Aguilar’s family has a patriarchal organization, as many other Mexican American families do. According to Kathleen Niska, in Mexican American families “continuity was characterized by mothers doing tasks inside the house, fathers doing tasks outside the house, and both parents performing toddler and early childhood tasks” (Niska, 2001). One of the Mexican traditions that Mr. Aguilar mentioned during the interview was the “quinceanera”. This ritual is celebrated in church when women reach the age of 15 to thank God that they have arrived at this stage of their lives. Similar to a wedding day celebration, the celebration of a girl’s fifteenth birthday is a major event in most Hispanic girls’ lives, as it means that she begins her journey to adulthood and is considered ready to marry (Mattel, 2001). The ritual of the quinceanera is viewed not only as a gesture to strengthen faith and family but also as a means to prevent teen pregnancies; a quinceanera also allows for sending a message of sexual responsibility (NC Times, 2008). Mr. Aguilar is an independent contractor, mainly for real estate management companies. He provides general maintenance services at $10 an hour. So far, he has had limited choices regarding the jobs (e.g. janitorial, landscaping, and maintenance) he could do since he moved to the United States.
Lack of education and his pending status with the INS (he has not received his green card yet, only a work permit) were the obstacles that did not allow him to obtain better-paid jobs. According to David Spener, “Mexican immigrant workers play an important economic role inside the United States as well. They constitute a significant portion (8 percent) of the total U.S. manufacturing work force” (Spener, 2000). Mexican Americans have usually been seen by American companies as “cheap labor”. Mr. Aguilar shared that members of his culture had been affected by various forms of racism, prejudice, and discrimination. In particular, he pointed out the bilingualism issue and the tension that Proposition 227 created in his community. Proposition 227 went into effect in 1998 and required that all public school instruction be in English. A) How do assumptions about cultural “norms” impact your interviewee’s behavior on a day-to-day basis? Mr. Aguilar pointed out how one particular assumption regarding Mexican American culture affects his life on a day-to-day basis. He mentioned that one of his cousins was a gang member; he was killed months ago. A popular assumption is that Mexican American gang membership is generational, passing from father to son or from one family member to another. Therefore, based on this assumption, people believe that he is a gang member. So, Mr. Aguilar’s behavior is directed at preventing anything that may mislead people in this sense, e.g. wearing red or blue, wearing specific clothing, or having tattoos. B) Does your interviewee recognize any challenges or disadvantages related to her/his culture being outside the “norm”? How does he/she respond to those challenges? Like several other fellow Mexican Americans, Mr. Aguilar is able to communicate in his native language without learning English properly. In fact, the Spanish language is commonly spoken in the city of Los Angeles.
Almost every place (grocery stores, restaurants, public offices, and so forth) has signs and directions in Spanish. This massive promotion of Mr. Aguilar’s native language in the United States, encouraged by the mass media, represents a disadvantage. Radio and television have also been factors that allowed Mexican Americans such as Mr. Aguilar to maintain their original cultural values. In fact, in 2004, there were over 678 Spanish-language radio stations, compared to 1982, when there were 12 Spanish-language television stations in the United States; this number more than doubled within 10 years. Several artists (e.g. Jennifer Lopez or Shakira) helped to promote their cultures by singing in their traditional languages (Jandt, 2007). Mr. Aguilar has responded to this challenge by enrolling himself in an adult school in order to improve his English. However, as of today, he is still struggling to write, read, and speak English fluently. C) Does your interviewee recognize any privileges or advantages associated with assimilating to the “normative” culture? How does he/she react to that recognition? Mr. Aguilar recognized that being assimilated to the “normative” culture has some advantages. In particular, he stated that a positive aspect is that when immigrants learn the language of the “normative” culture, they are able to avoid any form of isolation and segregation. Furthermore, these immigrants likely will not face the prejudice from the dominant society that he experienced during his stay in the United States. During the interview Mr. Aguilar recalled a few family acquaintances with 3rd-generation sons and daughters whose adaptation to American culture was different compared to their parents’. In fact, they were able to go to school, learn the language, get a college education, and obtain a good job. They became a part of American culture. In fact, they celebrate the 4th of July and Thanksgiving, which are truly American holidays.
They also had to learn about professional sports other than Mexican soccer; Mr. Aguilar now also watches baseball, basketball, and American football games. D) What does that person cite as being sources of strength or support? Mr. Aguilar cited church and family as his sources of strength or support. Like many other fellow Mexican Americans, Mr. Aguilar gives exceptional importance to religion and family on a day-to-day basis; he is very active in his community, especially with humanitarian initiatives promoted by his Catholic church. Mr. Aguilar is very family oriented. He tries to spend as much time as possible with his family; it is common to see him doing business with his family around. E) An analysis of the four dimension theory and how it relates to the interviewee’s life. A theory from the course that related well to Mr. Aguilar’s interview is the “four dimensions of culture” by Geert Hofstede. In particular, Aguilar’s interview confirmed that in Mexican culture masculinity is predominant, mostly due to its history. Mexican families were mainly patriarchal; therefore men were in charge of the family, working to provide money and food while women were at home taking care of the children. Mexican culture is based more on collectivism, due mainly to the fact that people with financial difficulties seek one another for help or gather together (e.g. two or three families living in the same apartment). Power distance is embodied in Mexican culture. Mexico is a developing country with significant financial problems, and the difference between people (e.g. poor and rich people) is well marked. Finally, the fourth dimension, uncertainty avoidance, is correlated to the religion and history of cultures (Jandt, 2007). Roman Catholic cultures and cultures with Romance languages (e.g. Mexico) tend to score high.
In conclusion, communication with various members that belong to other ethnic groups may be difficult at times because of the cultural differences that exist between the groups. However, inclusive language can be an effective way to communicate with such members. Mr. Aguilar and the interviewer are from different cultural backgrounds, and there were times during the interview where inclusive language was used to avoid miscommunication. Choosing the right words when communicating with members who have different backgrounds may help to prevent miscommunication that may end up stereotyping them on the basis of race, gender, disability, religion, or other factors. Furthermore, conducting research and gathering information on members with different backgrounds may help not only to overcome language issues but also to assist people in becoming more culturally sensitive.

References

http://nsq.sagepub.com/cgi/content/abstract/14/4/322
http://www.nctimes.com/articles/2008/01/05/faith/17_56_271_3_08.txt

Saturday, September 28, 2019

An Inspector Calls Character Profile Essay

Arthur Birling: Husband of Sybil, father of Sheila and Eric. He is the owner of Birling and Company, some sort of factory business that employs several girls to work on machines. He is a magistrate and, two years ago, was Lord Mayor of Brumley.

Gerald Croft: Engaged to Sheila. His parents, Sir George and Lady Croft, are above the Birlings socially, and it seems his mother disapproves of his engagement to Sheila. He works for his father’s company, Crofts Limited, which seems to be both bigger and older than Birling and Company.

Sheila Birling: Engaged to be married to Gerald. Daughter of Arthur Birling and Sybil Birling, and sister of Eric.

Sybil Birling: Married to Arthur. Mother of Sheila and Eric. Sybil is, like her husband, a woman of some public influence, sitting on charity organizations and having been married two years ago to the Lord Mayor. She is an icily impressive woman, arguably the only one of all the Birlings to almost completely resist the Inspector’s attempts to make her realize her responsibilities.

Eric Birling: Son of Arthur and Sybil Birling. Brother of Sheila Birling. Eric has a drinking problem. He works at Birling and Company, and his father is his boss.

Inspector Goole: The Inspector is in his fifties, and he is dressed in a plain dark suit. He initially seems to be an ordinary Brumley police inspector, but (as his name might suggest) comes to seem something more ominous, perhaps even a supernatural being.

Edna: The parlour maid.

Eva Smith: A girl who the Inspector claims worked for Birling and was fired, before working for Milwards and then being dismissed. She subsequently had relationships with Gerald Croft and then Eric Birling (by whom she became pregnant).

Friday, September 27, 2019

Business Administration class Internal Analysis Essay

Business Administration class Internal Analysis - Essay Example The mission and vision statements of StilSim Company require the company to earn respect in the existing market by providing the best-quality services and going the extra mile to meet clients’ requirements. This mission statement acts as a guide, as it states the positions and expectations of StilSim Company. StilSim accomplishes its mission by adopting strategies such as developing long-lasting relationships with its clients, exceeding productivity standards, adopting the best and cheapest strategies, and developing synergistic teamwork within the organization. Furthermore, the company also incorporates its core values in ensuring that its mission and vision are realized. These core values are professionalism and integrity; other values include leveraging technology, innovation, and teamwork to satisfy customers, decisiveness, embracing growth opportunities, and setting meaningful goals (StilSim Personnel, n.d.). StilSim Company envisions itself as being the best in the region, a goal to be realized in the next three years. The company plans to train its current workforce to use new tools to satisfy its customers’ demands and to improve its internal communication strategy. These are the two main points on which the company banks. The company is also strategizing on various ways of motivating its performing employees. This tool is mainly used by companies to evaluate the existing strengths and weaknesses within a business. It serves as a strategic management tool for the identification and evaluation of all functional areas within an organization. Moreover, it gives an actual picture of how functional business areas relate to each other. There are strengths that facilitate the existence and performance of any organization. StilSim has been operational for more than twenty years. This can be directly attributed to the company’s long history of providing good services to customers.
The quality

Thursday, September 26, 2019

The Clean Air Act Essay Example | Topics and Well Written Essays - 500 words

The Clean Air Act - Essay Example From this study it is clear that the Act created federal benchmarks for mobile sources of air pollution. The standards also extended to fuels as well as to over 187 hazardous air pollutants. Moreover, the Act provided for a cap-and-trade program for the emissions causing acid rain. Further, the Act culminated in a comprehensive permit framework for the chief sources of air pollution. Furthermore, the Act deals with the prevention of pollution in areas with clean air as well as the safeguarding of the stratospheric ozone layer. This essay discusses how the Clean Air Act has been central to the health sector. For example, it is estimated that over 22 trillion dollars have been saved in health-care costs. As demanded by Congress to ascertain the worthiness of the Act, the EPA conducted periodic scientific studies assessing the benefits and costs of the Act. The report, initially produced in October 1997 and providing an in-depth retrospective examination of benefits and costs between 1970 and 1990, revealed overwhelming benefits attained by complying with the Act over the cost of implementation. The EPA applied dose-response data from the scientific review. The study modelling projected over 184,000 fewer premature deaths annually, as well as 674 fewer chronic cases. Moreover, the study revealed over 22 million fewer lost days at work, among other key outcomes. The Act has also been central to promoting environmental protection, leading to clean air to breathe.

DNA Fingerprinting Research Paper Example | Topics and Well Written Essays - 1250 words

DNA Fingerprinting - Research Paper Example The high rate of variation arises because DNA fingerprinting relies on non-coding hyper-variable sequences to produce a unique pattern of bands for each individual. DNA profiling relies on the discovery of a broad range of restriction enzymes and their specificity. DNA typing has a wide range of applications, from paternity testing, criminal investigations, and population studies to the identification of tragedy victims. Other applications are in conservation biology and evolution studies. However, DNA typing presents its own challenges, especially concerning the amount of sample and the accuracy of the process.

Introduction

DNA fingerprinting has caused a revolution in the world since its description in 1985. Deoxyribonucleic acid is present in all body cells. DNA consists of a sugar, four nucleotides, and a phosphate group. The nucleotides, commonly called bases, differ in the frequency of occurrence and the order in which they occur. The general DNA structure is similar in all individuals. However, the order and frequency of bases brings a remarkable difference between individuals. DNA fingerprinting presents a profile of an individual’s DNA. The four bases, namely adenine, cytosine, thymine, and guanine, form unique sequences on the two DNA strands. Studies reveal that there are sequences that encode essential proteins necessary for all cell functions. Geneticists call these coding sequences exons. In addition, there are non-coding sequences, the introns. Studies have revealed that the coding sequences are present in every individual because they code for proteins that drive the life process. These sequences have great similarity between individuals and display limited variation. On the other hand, the non-coding sequences portray a high level of variation and form the basis of DNA profiling.

Basis of Fingerprinting

DNA profiling is currently the most powerful tool in individual identification.
It utilizes the variation of the non-coding sequences to produce unique profiles for each individual (Starr et al 247). The variation in these sequences is so high that it minimizes the probability of two individuals having identical profiles to virtually zero. Due to their high level of variability, geneticists call them hyper-variable regions. These regions consist of about ten to fifteen core sequences that may repeat themselves several times at different locations in the chromosome. The non-coding regions appear in between the coding regions. The frequency of repetition of these highly variable regions results in the differences among individuals. Studies indicate that only identical twins produce similar DNA profiles. The reliability of DNA profiles exceeds that of traditional fingerprints: the environment contributes greatly to the patterns on an individual’s fingers, and that method presented its own challenges. DNA fingerprinting presents a great potential for providing accurate profiles that can differentiate two individuals. Closely related individuals display a level of similarity in their profiles depending on the level of correlation.

Procedure of Running a DNA Fingerprint

DNA fingerprinting is a laboratory technology involving several procedures. The discovery of restriction enzymes, which cleave DNA at specific recognition sites, formed the stepping-stone to DNA fingerprinting. The initial step in DNA typing is the isolation of DNA from the sample. Samples may be blood, cells, saliva, urine, hair follicles, bones, teeth, and hair fragments (Read 21). Geneticists recognize the existence of both nuclear DNA, found in the cell nucleus, and mitochondrial DNA, found in the mitochondrion. The amount of sample available determines the type of DNA isolated. In cases where small samples are available
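The cutting step described above, in which restriction enzymes cleave DNA at specific recognition sites, can be illustrated with a short sketch. The enzyme and its recognition site are real (EcoRI cuts G^AATTC), but the sample sequence and the helper function are invented for illustration, not taken from any laboratory protocol.

```python
def restriction_fragments(dna: str, site: str, cut_offset: int) -> list[str]:
    """Cut a DNA string at every occurrence of a recognition site.

    cut_offset is where the enzyme cuts within the site
    (EcoRI cuts G^AATTC, i.e. offset 1 into the site).
    """
    fragments, start = [], 0
    pos = dna.find(site)
    while pos != -1:
        fragments.append(dna[start:pos + cut_offset])  # fragment up to the cut point
        start = pos + cut_offset
        pos = dna.find(site, pos + 1)                  # next recognition site
    fragments.append(dna[start:])                      # trailing fragment
    return fragments

# Invented sample sequence containing two EcoRI sites (GAATTC)
sample = "ATGGAATTCCGTTAGAATTCAA"
print(restriction_fragments(sample, "GAATTC", 1))  # → ['ATGG', 'AATTCCGTTAG', 'AATTCAA']
```

In a real fingerprint, the differing lengths of such fragments across individuals (caused by variation in the hyper-variable regions) are what produce each person's unique band pattern.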

Wednesday, September 25, 2019

Ive got some SPSS data (graphs and tables etc) can be found in a word Assignment

Ive got some SPSS data (graphs and tables etc) can be found in a word file named (the data) and need to be analysed and interpreted in the form of writing - Assignment Example The study is reduced to the involvement of only two ordinal variables, and hence only two-variable statistical designs are used to obtain results. It has to be mentioned that at the time of the original data collection, private schools had the reputation of being better and more progressive than state schools with respect to English teaching. The first research hypothesis intends to analyze the importance given by teachers to explaining the meaning of new English words to students. That is, there is a difference between state and private schools in the extent to which teachers explain the meaning of new words in English; specifically, teachers will explain words in English more in the private schools. The design involves only two independent groups, and the dependent variable is the score or rating. The descriptive statistics (Table 1) show that the study involved 108 students, 67 belonging to state schools and 41 belonging to private schools. The average rating given by the private school students (0.5366) is greater than that of the state school students (0.4627). A frequency chart (Fig. 1) was produced to compare the ratings given by the students of state and private schools on the extent to which teachers explain the meaning of new words in English. The lowest rating was 0, representing never, and the highest was 3, representing always. It is noticed that nearly 40 students of the state schools claimed that their teachers never explained the meaning of the English words. Nearly 44 students of the private schools claimed that their teachers either never or seldom explained the meaning of the English words. Only 1 student agreed that the teachers always explained the meaning. The independent samples T test is used to test the equality of the above-given averages.
Levene’s test is also used to find whether the assumption of homogeneity of variances is satisfied. Table 2 shows the results.
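The procedure described here, an independent-samples t-test preceded by Levene's test for homogeneity of variances, can be sketched outside SPSS with SciPy. The ratings below are invented 0-3 values for illustration only, not the study's actual data, and the 0.05 cutoff for Levene's test is a conventional assumption.

```python
from scipy import stats

# Hypothetical 0-3 ratings of how often teachers explain new English words
state_ratings   = [0, 0, 1, 0, 2, 1, 0, 1, 0, 3, 1, 0]  # state-school students
private_ratings = [1, 0, 2, 1, 0, 3, 1, 2, 0, 1]         # private-school students

# Levene's test: checks the equal-variance assumption of the classic t-test
lev_stat, lev_p = stats.levene(state_ratings, private_ratings)

# Use the pooled-variance t-test if variances look equal, Welch's test otherwise
equal_var = lev_p > 0.05
t_stat, t_p = stats.ttest_ind(state_ratings, private_ratings, equal_var=equal_var)

print(f"Levene p = {lev_p:.3f}, t = {t_stat:.3f}, p = {t_p:.3f}")
```

SPSS's "Independent Samples Test" table reports both rows at once (equal variances assumed and not assumed); the `equal_var` flag above mirrors the choice of which row to read.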

Tuesday, September 24, 2019

Animal Rights Research Paper Example | Topics and Well Written Essays - 1000 words

Animal Rights - Research Paper Example In many ways, elements of this group wish that animal rights would be even further reduced, due to the fact that animal rights are antithetical to their personal and/or political vantage point. Similarly, on the opposing side, there are those individuals who are deeply troubled by the way our current society disregards the worth and dignity of other life forms. In fairness, among this group as well exist zealots who would advocate for an extreme solution to the issue, such as all individuals becoming vegetarians to effect a positive change in animal rights worldwide. As such, as rationally and scientifically as possible, this analysis will work to lay out a moderate framework from which the author will attempt to explain and understand the relevant arguments that exist on both sides of this debate. The following provides a brief summary of some of the arguments that each side of this debate puts forward. The individuals who campaign for a greater degree of protection and animal rights argue the following:
- Due to the fact that eating meat necessarily entails the slaughter of an animal, it also entails grief, anxiety, and a high degree of suffering on the part of the animal.
- Raising animals for slaughter is an inherently callous practice, due to the fact that those individuals who are involved in the process begin to become hardened to the hardships and suffering that these animals undergo.
- Evidence from a number of physicians and studies has concluded that meat-eating is not necessarily beneficial to the health of those who eat it.
It is verifiable that if the entire planet became vegetarian, the amount of food that would be saved from feeding cattle stock, chickens, swine, and all the other meat animals that a great deal of our food supply goes towards would be more than sufficient to feed all of those who go without food.
The other side of the debate urges multiple levels of justification and rationalization for the killing of animals, for many reasons: animals are by nature stupid and incapable of understanding what their role in life is; therefore it is not necessary to respect their rights to the same extent that we respect human rights. It is moral and acceptable to use an animal for the needs of a human being if such use helps the human being(s) to continue to live and thrive

Monday, September 23, 2019

The causes and effects of smoking Research Paper - 1

The causes and effects of smoking - Research Paper Example In addition, it is seen that smoking kills more people than HIV, alcohol, road accidents, suicides, and murders do. Furthermore, about 90% of lung cancer deaths are due to smoking (Smoking & Tobacco Use). However, when the question of why people smoke is raised, the answer comes from Hughes (1) that they smoke because they are addicted to nicotine. Another, clearer answer comes from Cockerham (4): people continue smoking even though it provides an unpleasant sensation in the beginning, because they learn how to smoke by having other persons interpret the experience for them and teach them how to enjoy the desirable sensation, forgetting the undesirable. It seems that people learn smoking as a social activity, and it often originates in peer groups. It typically begins in adolescent groups, who are highly likely to imitate adults to look mature. It is often used as a weapon to impress others. However, sooner or later, they start smoking even when they are alone, and develop their dependence on nicotine. In an interview (How Best to Quit Smoking: Interview with Dr. Randy Gilchrist), Gilchrist pointed out that people continue to smoke even when they know that their health is eroding because, for smokers, the smoking habit is connected with many everyday activities and emotional states. Many of these activities act as triggers to smoke, and in his words, for them, the cigarette is something like a ‘reliable old friend that offers relaxation, comfort and focus’ (How Best to Quit Smoking: Interview with Dr. Randy Gilchrist). According to Butler and Hope (362), there are seven reasons that can be pointed out at this juncture. The first one is that some people feel good using cigarettes; it might be the feeling of social acceptance. According to some others, it is the best way to relax. Yet another category is fond of the taste of cigarettes.
Another vital revelation is that cigarette offers

Sunday, September 22, 2019

Paul Farmer Essay Example for Free

Paul Farmer Essay How far will one man go to achieve the impossible? How far can one man go to reach his goal? The country of Haiti is one of the most underdeveloped countries in the world, but one doctor, Paul Farmer, is determined to help cure this country. No matter what the costs, Paul Farmer is willing to do whatever it takes to help those in need. A doctor who graduated from Harvard, he is also the founder of Partners in Health and a teacher at Harvard. In the book Mountains Beyond Mountains, Tracy Kidder shows the perseverance, determination, and courage of Paul Farmer, how he tries to help cure an underdeveloped country, and how he treats his patients. Paul Farmer isn’t like any other doctor. He treats his patients with more care than an average doctor. In chapter 2 of Mountains Beyond Mountains, Paul Farmer is seen with a patient in Boston who is HIV positive. The patient, named Joe, doesn’t eat properly, doesn’t take his medication, and is also a drug addict. Though Joe doesn’t follow the protocol of being healthy, Paul Farmer insists on treating Joe with respect and care. The patient only wants to drink and wants somebody to take care of him. Farmer took the patient to a homeless shelter. During Christmas time, Farmer bought Joe a six-pack of beer as a Christmas gift. A message board at the hospital read “OUT- cold, their drugs, ½ gal. vodka, IN- warm, our drugs, 6 pack bud” (pg. 15). At the bottom of that message also read “Why do I know Paul Farmer wrote this?” (pg. 5). This shows how Dokte Paul Farmer cares for his patients and how the other doctors are used to his generosity with their patients. Not many doctors would go to such an extreme in taking care of one patient, or bend the rules to suit a patient’s needs. This act of kindness shows how much Paul Farmer cares about his patients and the people of Haiti. Paul Farmer treats everybody equally, whether they be rich or poor.
The US Army is in Haiti to reinstate the country's democratically elected government and to take away the power of the junta that is ruling the country with cruelty. In chapter 1, Paul Farmer despises how Captain Carroll and his men release Nerva Juste, a sheriff who is accused of beheading the assistant mayor of Mirebalais. Because there wasn't any hard evidence of Nerva killing the assistant mayor, the US Army had to let him go. Paul Farmer says to Captain Carroll, "Two clear sides existed in Haiti, the forces of repression and the Haitian poor, the vast majority… it still seems fuzzy which side the American soldiers are on" (pg. 15). This shows that Farmer is defending the poor Haitians.

Saturday, September 21, 2019

Advances in DNA Sequencing Technologies

Abstract

Recent advances in DNA sequencing technologies have led to efficient methods for determining the sequence of DNA. DNA sequencing was born in 1977, when Sanger et al proposed the chain termination method and Maxam and Gilbert proposed their own method in the same year. Sanger's method proved to be the more favourable of the two. Since the birth of DNA sequencing, more efficient DNA sequencing technologies have been produced, as Sanger's method was laborious, time consuming and expensive; Hood et al proposed automated sequencers involving dye-labelled terminators. Due to the lack of available computational power prior to 1995, sequencing an entire bacterial genome was considered out of reach. This became a reality when Venter and Smith proposed shotgun sequencing in 1995. Pyrosequencing was introduced by Ronaghi in 1996; this method produces the sequence in real time and is applied by 454 Life Sciences. An indirect method of sequencing DNA was proposed by Drmanac in 1987, called sequencing by hybridisation, and this method led to the DNA array used by Affymetrix. Nanopore sequencing is a single-molecule sequencing technique that involves single-stranded DNA passing through a lipid bilayer via an ion channel while the ion conductance is measured. Synthetic nanopores are being produced in order to substitute for the lipid bilayer. Illumina sequencing is one of the latest sequencing technologies to be developed, involving DNA clustering on flow cells and four dye-labelled terminators performing reversible termination. DNA sequencing has not only been used to sequence DNA but has also been applied in the real world, including the Human Genome Project and DNA fingerprinting.

Introduction

Reliable DNA sequencing became a reality in 1977, when Frederick Sanger perfected the chain termination method to sequence the genome of bacteriophage φX174 [1][2].
Before Sanger's proposal of the chain termination method, there was the plus and minus method, also presented by Sanger along with Coulson [2]. The plus and minus method depended on the use of DNA polymerase in transcribing a specific DNA sequence under controlled conditions. This method was considered efficient and simple; however, it was not accurate [2]. As well as Sanger's proposal of chain termination sequencing, another method of DNA sequencing was introduced by Maxam and Gilbert involving restriction enzymes, also reported in 1977, the same year as Sanger's method. The Maxam and Gilbert method shall be discussed in more detail later on in this essay. The proposal of these two methods spurred many further DNA sequencing methods, and as the technology developed, so did DNA sequencing. In this literature review, the various DNA sequencing technologies shall be looked into, as well as their applications in the real world and the tools that have aided the sequencing of DNA, e.g. PCR. This review shall begin with the discussion of the chain termination method by Sanger.

The Chain Termination Method

Sanger discovered that the inhibitory activity of 2′,3′-dideoxythymidine triphosphate (ddTTP) on DNA polymerase I was dependent on its incorporation into the growing oligonucleotide chain in the place of thymidylic acid (dT) [2]. In the structure of ddT there is no 3′-hydroxyl group, but a hydrogen atom in its place. With the hydrogen in place of the hydroxyl group, the chain cannot be extended any further, so termination occurs at the position where dT would have been incorporated. Figure 1 shows the structures of dNTP and ddNTP. In order to remove the 3′-hydroxyl group and replace it with a proton, the triphosphate has to undergo a chemical procedure [1]. There is a different procedure employed for each of the triphosphate groups. ddATP was produced from the starting material 3′-O-tosyl-2′-deoxyadenosine, which was treated with sodium methoxide in dimethylformamide to produce 2′,3′-dideoxy-2′,3′-didehydroadenosine, an unsaturated compound [4]. The double bond between carbons 2′ and 3′ of the cyclic ether was then hydrogenated with a palladium-on-carbon catalyst to give 2′,3′-dideoxyadenosine (ddA). The ddA was then phosphorylated in order to add the triphosphate group. Purification then took place on a DEAE-Sephadex column using a gradient of triethylamine carbonate at pH 8.4. Figure 2 is a schematic representation of the production of ddA prior to phosphorylation. In the preparation of ddTTP (Figure 3), thymidine was tritylated (+C(Ph)3) at the 5′-position and a methanesulphonyl (+CH3SO2) group was introduced at the 3′-OH group [5]. The methanesulphonyl group was substituted with iodine by refluxing the compound in 1,2-dimethoxyethane in the presence of NaI. After chromatography on a silica column, the 5′-trityl-3′-iodothymidine was hydrogenated in 80% acetic acid to remove the trityl group. The resultant 3′-iodothymidine was hydrogenated to produce 2′,3′-dideoxythymidine, which subsequently was phosphorylated. Once phosphorylated, ddTTP was purified on a DEAE-Sephadex column with a triethylammonium hydrogen carbonate gradient. Figure 3 is a schematic representation of the production of ddT prior to phosphorylation. When preparing ddGTP, the starting material was N-isobutyryl-5′-O-monomethoxytrityldeoxyguanosine [1].
After tosylation of the 3′-OH group, the compound was converted to the 2′,3′-didehydro derivative with sodium methoxide. The isobutyryl group was partly removed during this treatment with sodium methoxide and was removed completely by incubation in the presence of NH3 overnight at 45 °C. During the overnight incubation period, the didehydro derivative was reduced to the dideoxy derivative and then converted to the triphosphate. The triphosphate was purified by fractionation on a DEAE-Sephadex column using a triethylamine carbonate gradient. Figure 4 is a schematic representation of the production of ddG prior to phosphorylation. Preparing ddCTP was similar to ddGTP, but it was prepared from N-anisoyl-5′-O-monomethoxytrityldeoxycytidine. However, the purification process was omitted for ddCTP, as it produced a very low yield; the solution was therefore used directly in the experiment described in the paper [2]. Figure 5 is a schematic representation of the production of ddC prior to phosphorylation. With the four dideoxy samples now prepared, the sequencing procedure can commence. The dideoxy samples are placed in separate tubes, along with restriction fragments obtained from the φX174 replicative form and the four dNTPs [2]. Strand synthesis proceeds with the dNTPs, and when a ddNTP is incorporated into the growing polynucleotide it terminates further strand synthesis. This is due to the lack of the hydroxyl group at the 3′ position of the ddNTP, which prevents the next nucleotide from attaching onto the strand. The contents of the four tubes are separated by gel electrophoresis on acrylamide gels (see Gel-Electrophoresis). Figure 6 shows the sequencing procedure. Reading the sequence is straightforward [1]. The band that has moved the furthest is located first; this represents the smallest piece of DNA and is the strand terminated by incorporation of the dideoxynucleotide at the first position in the template. The track in which this band occurs is noted.
For example (shown in Figure 6), the band that moved the furthest is in track A, so the first nucleotide in the sequence is A. To find the next nucleotide, the next most mobile band is located, corresponding to a DNA molecule one nucleotide longer than the first; in this example, that band is in track T. Therefore the second nucleotide is T, and the overall sequence so far is AT. The process is carried on along the autoradiograph until the individual bands start to close in and become inseparable, and therefore hard to read. In general it is possible to read up to 400 nucleotides from one autoradiograph with this method. Figure 7 is a schematic representation of an autoradiograph. Ever since Sanger perfected his method of DNA sequencing, there have been advances in sequencing methods, along with notable achievements. Certain achievements, such as the Human Genome Project, shall be discussed later on in this review.

Gel-Electrophoresis

Gel electrophoresis is defined as the movement of charged molecules in an electric field [1][8]. DNA molecules, like many other biological compounds, carry an electric charge. In the case of DNA, this charge is negative. Therefore, when DNA is placed in an electric field, the molecules migrate towards the positive pole (as shown in Figure 8). There are three factors which affect the rate of migration: shape, electrical charge and size. The polyacrylamide gel comprises a complex network of pores through which the molecules must travel to reach the anode.

Maxam and Gilbert Method

The Maxam and Gilbert method was proposed before Sanger's method in the same year. While Sanger's method involves enzymatic synthesis of radiolabelled fragments from unlabelled DNA strands [2], the Maxam-Gilbert method involves chemical cleavage of prelabelled DNA strands in four different ways to form four different collections of labelled fragments [6][7]. Both methods use gel electrophoresis to separate the DNA target molecules [8].
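The procedure for reading a Sanger autoradiograph, described earlier, can be made concrete with a minimal Python sketch. This is purely illustrative (the essay contains no code; names such as `sanger_fragments` are invented): sorting fragments by length while noting which ddNTP tube each came from recovers the template.

```python
# Toy model of reading a Sanger autoradiograph (illustrative only).
# Each ddNTP tube yields fragments terminating at every position of
# that base; sorting all fragments by length and noting which tube
# each band came from recovers the template sequence.

def sanger_fragments(template):
    """Map each base to the lengths of fragments its ddNTP terminates."""
    tubes = {base: [] for base in "ACGT"}
    for position, base in enumerate(template, start=1):
        tubes[base].append(position)  # fragment of this length ends in `base`
    return tubes

def read_autoradiograph(tubes):
    """Read bands from shortest (fastest-migrating) to longest."""
    bands = [(length, base) for base, lengths in tubes.items() for length in lengths]
    return "".join(base for _, base in sorted(bands))

template = "ATGGCTA"
assert read_autoradiograph(sanger_fragments(template)) == template
```

As in the text's example, the shortest fragment names the first base, the next shortest the second, and so on.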
However, Sanger's chain termination method has proven to be simpler and easier to use than the Maxam and Gilbert method [9]. As a matter of fact, looking through the literature textbooks, Sanger's method of DNA sequencing tends to be explained rather than Maxam and Gilbert's [1][3][9][10]. With Maxam and Gilbert's method there are two chemical cleavage reactions that take place [6][7]. One of the chemical reactions takes place with guanine and adenine, the two purines, and the other cleaves the DNA at cytosine and thymine, the pyrimidines. For each cleavage reaction, specific reagents are used. The purine-specific reagent is dimethyl sulphate and the pyrimidine-specific reagent is hydrazine. Each of these reactions is done in a different way, as each of the four bases has different chemical properties. The cleavage reaction for guanine/adenine involves using dimethyl sulphate to add a methyl group to the guanines at the N7 position and to the adenines at the N3 position [7]. The glycosidic bond of a methylated adenine is unstable and breaks easily on heating at neutral pH, leaving the sugar free. Treatment with 0.1 M alkali at 90 °C will then cleave the sugar from the neighbouring phosphate groups. When the resulting end-labelled fragments are resolved on a polyacrylamide gel, the autoradiograph contains a pattern of dark and light bands. The dark bands arise from breakage at the guanines, which methylate at a rate 5-fold faster than the adenines. From this reaction the guanines appear stronger than the adenines, which can lead to misinterpretation; therefore an adenine-enhanced cleavage reaction takes place. Figure 9 shows the structural changes of guanine when undergoing the structural modifications involved in Maxam-Gilbert sequencing.
With adenine-enhanced cleavage, the glycosidic bond of methylated adenosine is less stable than that of methylated guanosine, thus gentle treatment with dilute acid at the methylation step releases the adenine, allowing darker bands to appear on the autoradiograph [7]. The chemical cleavage for the cytosine and thymine residues involves hydrazine instead of dimethyl sulphate. The hydrazine cleaves the base, leaving ribosylurea [7]. After partial hydrazinolysis in 15-18 M aqueous hydrazine at 20 °C, the DNA is cleaved with 0.5 M piperidine. The piperidine (a cyclic secondary amine), as the free base, displaces all the products of the hydrazine reaction from the sugars and catalyses the β-elimination of the phosphates. The final pattern contains bands of similar intensity from the cleavages at the cytosines and thymines. For cleavage at the cytosines only, the presence of 2 M NaCl preferentially suppresses the reaction of thymine with hydrazine. Once the cleavage reaction has taken place, each original strand is broken into a labelled fragment and an unlabelled fragment [7]. All the labelled fragments start at the 5′ end of the strand and terminate at the base that precedes the site of cleavage along the original strand. Only the labelled fragments are recorded on the gel electrophoresis.

Dye-labelled terminators

For many years DNA sequencing was done by hand, which is both laborious and expensive [3]. Before automated sequencing, about 4 × 10^6 bases of DNA had been sequenced following the introduction of the Sanger and Maxam-Gilbert methods [11]. Both methods require four sets of reactions and a subsequent electrophoresis step in adjacent lanes of a high-resolution polyacrylamide gel. With the new automated sequencing procedures, four different fluorophores are used, one in each of the base-specific reactions.
The reaction products are combined and co-electrophoresed, and the DNA fragments generated in each reaction are detected near the bottom of the gel and identified by their colour. As for choosing which DNA sequencing method to use, Sanger's method was chosen, because it had proven to be the most durable and efficient method of DNA sequencing and was the choice of most investigators in large-scale sequencing [12]. Figure 10 shows how a typical sequence is generated using an automated sequencer. The selection of the dyes was central to the development of automated DNA sequencing [11]. The fluorophores that were selected had to meet several criteria. For instance, the absorption and emission maxima had to be in the visible region of the spectrum [11], which is between 380 nm and 780 nm [10], and each dye had to be easily distinguishable from the others [11]. Also, the dyes should not impair the hybridisation of the oligonucleotide primer, as this would decrease the reliability of synthesis in the sequencing reactions. Figure 11 shows the structures of the dyes used in a typical automated sequencing procedure, where X is the moiety to which the dye is bound. Table 1 shows which dye is covalently attached to which nucleotide in a typical automated DNA sequencing procedure.

Table 1. Dye and the nucleotide it is attached to:
Fluorescein: Adenosine
NBD: Thymine
Tetramethylrhodamine: Guanine
Texas Red: Cytosine

In designing the instrumentation of the fluorescence detection apparatus, the primary consideration was sensitivity. As the concentration of each band on the co-electrophoresis gel is around 10 M, the instrument needs to be capable of detecting dye concentrations of that order. This level of detection can readily be achieved by commercial spectrofluorimeter systems. Unfortunately, detection from a gel leads to a much higher background scatter, which in turn leads to a decrease in sensitivity.
This is solved by using a laser excitation source in order to obtain maximum sensitivity [11]. Figure 12 is a schematic diagram of the instrument, with an explanation of the instrumentation employed. When analyzing the data, Hood found some complications [11]. Firstly, the emission spectra of the different dyes overlapped; to overcome this, multicomponent analysis was employed to determine the amounts of the four dyes present in the gel at any given time. Secondly, the different dye molecules impart non-identical electrophoretic mobilities to the DNA fragments, meaning that the oligonucleotides were not of equal effective base lengths. The third major complication in analyzing the data comes from the imperfections of the enzymatic methods; for instance, there are often regions of the autoradiograph that are difficult to sequence. These complications were overcome in five steps [11]:

1. High-frequency noise is removed by using a low-pass Fourier filter.
2. A time delay (1.5-4.5 s) between measurements at different wavelengths is partially corrected for by linear interpolation between successive measurements.
3. A multicomponent analysis is performed on each set of four data points; this computation yields the amount of each of the four dyes present in the detector as a function of time.
4. The peaks present in the data are located.
5. The mobility shift introduced by the dyes is corrected for using empirically determined correction factors.

Since the publication of Hood's proposal of fluorescence detection in automated DNA sequence analysis, research has focused on developing detection methods which are better in terms of sensitivity [12].

Bacterial and Viral Genome Sequencing (Shotgun Sequencing)

Prior to 1995, many viral genomes had been sequenced using Sanger's chain termination technique [13], but no bacterial genome had been sequenced.
The viral genomes that had been sequenced include the 229 kb genome of cytomegalovirus [14] and the 192 kb genome of vaccinia [15], and the 187 kb mitochondrial and 121 kb chloroplast genomes of Marchantia polymorpha had also been sequenced [16]. Viral genome sequencing was based upon the sequencing of clones usually derived from extensively mapped restriction fragments, or λ or cosmid clones [17]. Despite advances in DNA sequencing technology, the sequencing of genomes had not progressed beyond clones on the order of ~250 kb, due to the lack of computational approaches that would enable the efficient assembly of a large number of fragments into an ordered single assembly [13][17]. Upon this, Venter and Smith in 1995 proposed shotgun sequencing, which enabled Haemophilus influenzae (H. influenzae) to become the first bacterial genome to be sequenced [13][17]. H. influenzae was chosen as it has a base composition similar to that of a human, with 38% of the sequence made of G + C. Table 2 shows the procedure of shotgun sequencing [17]. When constructing the library, ultrasonic waves were used to randomly fragment the genomic DNA into fairly small pieces of about the size of a gene [13]. The fragments were purified and then attached to plasmid vectors [13][17]. The plasmid vectors were then inserted into E. coli host cells to produce a library of plasmid clones. The E. coli host cell strains had no restriction enzymes, which prevented any deletions, rearrangements and loss of the clones [17]. The fragments were randomly sequenced using automated sequencers (see Dye-labelled terminators), with the use of T7 and SP6 primers to sequence the ends of the inserts to enable coverage of fragments by a factor of 6 [17].
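The 6x coverage target mentioned above follows from simple arithmetic: average coverage is the number of reads times the read length divided by the genome size. Here is a hedged sketch; the ~1.83 Mb genome size and 460 bp read length are illustrative assumptions, not figures from the text.

```python
# Back-of-the-envelope shotgun coverage: coverage = reads * read_length / genome.
# Genome size (~1.83 Mb for H. influenzae) and read length (460 bp) are
# hypothetical values used only to illustrate the calculation.

def reads_needed(genome_size, read_length, coverage):
    """Random reads required to reach a given average coverage."""
    return int(round(coverage * genome_size / read_length))

n = reads_needed(genome_size=1_830_000, read_length=460, coverage=6)
# on the order of twenty-odd thousand reads for 6x coverage
```

Note that 6x average coverage still leaves physical and sequence gaps, which is why the protocol in Table 2 includes an explicit gap-closure stage.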
Table 2 (Reference 17). Stages of shotgun sequencing:
Random small insert and large insert library construction: shear genomic DNA randomly to ~2 kb and 15 to 20 kb respectively.
Library plating: verify random nature of library and maximize random selection of small insert and large insert clones for template production.
High-throughput DNA sequencing: sequence a sufficient number of fragments from both ends for 6x coverage.
Assembly: assemble random sequence fragments and identify repeat regions.
Gap closure (physical gaps): order all contigs (fingerprints, peptide links, λ clones, PCR) and provide templates for closure.
Gap closure (sequence gaps): complete the genome sequence by primer walking.
Editing: inspect the sequence visually and resolve sequence ambiguities, including frameshifts.
Annotation: identify and describe all predicted coding regions (putative identifications, starts and stops, role assignments, operons, regulatory regions).

Once the sequencing reactions have been completed, the fragments need to be assembled, a process done using the software TIGR Assembler (The Institute for Genomic Research) [17]. The TIGR Assembler simultaneously clusters and assembles fragments of the genome. In order to obtain the speed necessary to assemble more than 10^4 fragments [17], an algorithm is used to build a table of all 10-bp oligonucleotide subsequences and generate a list of potential sequence fragment overlaps. The algorithm begins with an initial contig (a single fragment); to extend the contig, a candidate fragment is chosen based on its overlapping oligonucleotide content. The initial contig and candidate fragment are aligned by a modified version of the Smith-Waterman [18] algorithm, which allows optional gapped alignments. The contig is extended by the fragment only if strict criteria of overlap content are met. The algorithm automatically lowers these criteria in regions of minimal coverage and raises them in regions with a possible repetitive element [17].
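The 10-bp oligonucleotide table just described can be sketched as follows. This is an invented simplification, not the TIGR Assembler itself: the real program follows candidate selection with Smith-Waterman alignment and coverage-dependent acceptance criteria.

```python
# Sketch of a k-mer index used to propose candidate fragment overlaps,
# in the spirit of the 10-bp subsequence table described for the TIGR
# Assembler (simplified; no alignment or quality checks performed here).

from collections import defaultdict

def kmer_index(fragments, k=10):
    """Map every length-k subsequence to the fragments containing it."""
    index = defaultdict(set)
    for name, seq in fragments.items():
        for i in range(len(seq) - k + 1):
            index[seq[i:i + k]].add(name)
    return index

def candidate_overlaps(fragments, k=10, min_shared=1):
    """Pairs of fragments sharing at least `min_shared` k-mers."""
    index = kmer_index(fragments, k)
    shared = defaultdict(int)
    for names in index.values():
        for a in names:
            for b in names:
                if a < b:
                    shared[(a, b)] += 1
    return {pair for pair, count in shared.items() if count >= min_shared}
```

In a full assembler, each candidate pair would then be verified by a banded Smith-Waterman alignment before the contig is extended.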
The TIGR Assembler is designed to take advantage of large clone sizes [17]. It also enforces the constraint that sequences from the two ends of the same template must point toward one another in the contig and be located within a certain range of base pairs [17]. The TIGR Assembler therefore provides the computational power to assemble the fragments. Once the fragments have been aligned, the TIGR Editor is used to proofread the sequence and check for any ambiguities in the data [17]. This technique does require precautionary care; for instance, the small insert library should be constructed and end-sequenced concurrently [17], and it is essential that the sequence fragments are of the highest quality and rigorously checked for any contamination [17].

Pyrosequencing

Most DNA sequencing methods require gel electrophoresis; however, in 1996 at the Royal Institute of Technology, Stockholm, Ronaghi proposed pyrosequencing [19][20]. This is an example of sequencing-by-synthesis, where DNA molecules are clonally amplified on a template, and this template then undergoes sequencing [25]. The approach relies on the detection of DNA polymerase activity through enzymatic luminometric detection of the inorganic pyrophosphate (PPi) that is released during DNA synthesis, and offers the advantage of real-time detection [19]. Ronaghi used Nyren's [21] description of an enzymatic system consisting of DNA polymerase, ATP sulphurylase and luciferase to couple the release of PPi, obtained when a nucleotide is incorporated by the polymerase, with a light emission that can be easily detected by a luminometer or photodiode [20]. When PPi is released, it is immediately converted to adenosine triphosphate (ATP) by ATP sulphurylase, and the level of generated ATP is sensed by luciferase, which produces photons [19][20][21]. Unused ATP and deoxynucleotides are degraded by the enzyme apyrase.
The presence or absence of PPi, and therefore the incorporation or non-incorporation of each nucleotide added, is ultimately assessed on the basis of whether or not photons are detected. There is minimal time lapse between these events, and the conditions of the reaction are such that iterative addition of nucleotides and PPi detection are possible. The PPi released via nucleotide incorporation is detected by ELIDA (Enzymatic Luminometric Inorganic pyrophosphate Detection Assay) [19][21]. It is within the ELIDA that the PPi is converted to ATP, with the help of ATP sulfurylase, and the ATP reacts with luciferin to generate light, at more than 6 × 10^9 photons at a wavelength of 560 nm, which can be detected by a photodiode, photomultiplier tube, or charge-coupled device (CCD) camera [19][20]. As mentioned before, the DNA molecules need to be amplified by the polymerase chain reaction (PCR, which is discussed later). Ronaghi observed that dATP interfered with the detection system [19]. This interference is a major problem when the method is used to detect a single-base incorporation event. The problem was rectified by replacing dATP with dATPαS (deoxyadenosine α-thiotriphosphate). It was noticed that adding a small amount of dATP (0.1 nmol) induces an instantaneous increase in the light emission followed by a slow decrease until it reaches a steady-state level (as Figure 11 shows). This makes it impossible to start a sequencing reaction by adding dATP; the reaction must instead be started by addition of DNA polymerase. The signal-to-noise ratio also became higher for dATP compared to the other nucleotides. On the other hand, addition of 8 nmol of dATPαS (80-fold more than the amount of dATP) had only a minor effect on the luciferase (as Figure 14 shows). However, dATPαS is less than 0.05% as effective as dATP as a substrate for luciferase [19].
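A toy model may help fix the pyrosequencing read-out in mind: nucleotides are dispensed one at a time, each incorporation releases one PPi, and so the light signal is proportional to the length of the homopolymer run incorporated. This is an invented sketch of the principle, not vendor software, and the fixed ACGT dispensation order is an assumption for illustration.

```python
# Toy pyrogram: nucleotides are dispensed in a fixed cycle; whenever the
# dispensed base matches the next template position(s), incorporation
# releases PPi, and the simulated light intensity equals the number of
# bases incorporated (the homopolymer run length).

def pyrogram(template, order="ACGT", cycles=20):
    signals = []          # (dispensed base, light intensity) per dispensation
    pos = 0
    for _ in range(cycles):
        for base in order:
            run = 0
            while pos < len(template) and template[pos] == base:
                run += 1  # each incorporation releases one PPi -> more light
                pos += 1
            signals.append((base, run))
            if pos == len(template):
                return signals
    return signals

# A two-base homopolymer gives a peak of height 2:
assert pyrogram("AACG") == [("A", 2), ("C", 1), ("G", 1)]
```

The doubled "A" peak illustrates why homopolymer length is inferred from signal intensity in real pyrosequencing, and why long runs are its weak point.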
Pyrosequencing was adapted by 454 Life Sciences for sequencing by synthesis [22] and is known as the Genome Sequencer (GS) FLX [23][24]. The 454 system consists of random ssDNA (single-stranded) fragments, with each random fragment bound to a bead under conditions that allow only one fragment per bead [22]. Once a fragment is attached to a bead, clonal amplification occurs via emulsion. The emulsified beads are purified, placed in microfabricated picolitre wells, and then undergo pyrosequencing. A lens array in the detection part of the instrument focuses the luminescence from each well onto the chip of a CCD camera. The CCD camera images the plate every second in order to detect the progression of the pyrosequencing [20][22]. The pyrosequencing machine generates raw data in real time, in the form of bioluminescence generated from the reactions, and the data are presented on a pyrogram [20].

Sequencing by Hybridisation

The chain termination, Maxam and Gilbert, and pyrosequencing approaches discussed earlier are all direct methods of sequencing DNA, where each base position is determined individually [26]. There are also indirect methods of sequencing DNA, in which the DNA sequence is assembled based on experimental determination of the oligonucleotide content of the chain. One promising method of indirect DNA sequencing is called sequencing by hybridisation, in which sets of oligonucleotide probes are hybridised under conditions that allow the detection of complementary sequences in the target nucleic acid [26]. Sequencing by hybridisation (SBH) was proposed by Drmanac et al in 1987 [27] and is based on Doty's observation that when DNA is heated in solution, the double strand melts to form single-stranded chains, which then re-nature spontaneously when the solution is cooled [28]. This gives one piece of DNA the possibility of recognising another.
This hence led to Drmanac's proposal that oligonucleotide probes be hybridised under these conditions, allowing the complementary sequence in the DNA target to be detected [26][27]. In SBH, an oligonucleotide probe (an n-mer probe, where n is the length of the probe) is a substring of a DNA sample. The process is similar to doing a keyword search in a page full of text [29]. The set of positively expressed probes is known as the spectrum of the DNA sample. For example, suppose the single-stranded DNA 5′-GGTCTCG-3′ is to be sequenced using 4-mer probes and 5 probes hybridise onto the sequence successfully. The remaining probes form hybrids with a mismatch at the end base and are denatured during selective washing. The five probes that match well at the end base result in fully matched hybrids, which are retained and detected. Each positively expressed probe serves as a platform to decipher the next base, as is seen in Figure 16. The probes that have successfully hybridised onto the sequence then need to be detected. This is achieved by labelling the probes with dyes such as Cyanine3 (Cy3) and Cyanine5 (Cy5), so that the degree of hybridisation can be detected by imaging devices [29]. SBH methods are ideally suited to microarray technology due to their inherent potential for parallel sample processing [29]. An important advantage of using a DNA array rather than a multiple probe array is that all the resulting probe-DNA hybrids in any single probe hybridisation are of identical sequence [29]. One of the main types of DNA hybridisation array format is the oligonucleotide array, which is currently patented by Affymetrix [30]. The commercial uses of this shall be discussed under the application of the DNA Array (Affymetrix). Due to the small size of the hybridisation array and the small amount of the target present, it is a challenge to acquire the signals from a DNA array [29]. These signals must first be amplified before they can be detected by the imaging devices.
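The spectrum idea described above, where each positive probe serves as a platform to decipher the next base, can be sketched in a few lines of Python. This is an idealised, invented reconstruction that assumes an error-free spectrum and unambiguous one-base extensions; real SBH must cope with ambiguous spectra and hybridisation errors.

```python
# Sketch of spectrum-based sequence reconstruction (SBH). Starting from
# one probe, the sequence is extended one base at a time by finding a
# probe whose first n-1 bases match the last n-1 bases read so far.

def spectrum(sequence, n):
    """All length-n substrings: the probes that would hybridise."""
    return {sequence[i:i + n] for i in range(len(sequence) - n + 1)}

def reconstruct(probes, start, length):
    """Greedily extend `start` until the target length is reached."""
    seq = start
    n = len(start)
    while len(seq) < length:
        suffix = seq[-(n - 1):]
        matches = [p for p in probes if p.startswith(suffix)]
        if len(matches) != 1:
            break  # ambiguous or missing probe: reconstruction stalls
        seq += matches[0][-1]
    return seq

# The text's example: 4-mer probes over the target GGTCTCG.
probes = spectrum("GGTCTCG", 4)
assert reconstruct(probes, "GGTC", 7) == "GGTCTCG"
```

The `break` on ambiguity reflects a real limitation of SBH: repeated (n-1)-mers in the target make the spectrum consistent with more than one sequence.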
Signals can be boosted by two means, namely target amplification and signal amplification. In target amplification, such as PCR, the amount of target is increased to enhance signal strength, while in signal amplification the amount of signal per unit of target is increased.

Nanopore Sequencing

Nanopore sequencing was proposed in 1996 by Branton et al, who showed that individual polynucleotide molecules can be characterised using a membrane channel [31]. Nanopore sequencing is an example of single-molecule sequencing, in which the concept of sequencing-by-synthesis is followed but without the prior amplification step [24]. This is achieved by measuring the ionic conductance as a nucleotide passes through a single ion channel in a biological membrane or planar lipid bilayer. The measurement of ionic conductance is routine in neurobiology and biophysics [31], as well as pharmacology (Ca2+ and K+ channels) [32] and biochemistry [9]. Most channels undergo voltage-dependent or ligand-dependent gating, but there are several large ion channels (e.g. Staphylococcus aureus α-hemolysin) which can remain open for extended periods, thereby allowing a continuous ionic current to flow across a lipid bilayer [31]. A transmembrane voltage applied across an open channel of appropriate size should draw DNA molecules through the channel as extended linear chains whose presence would detectably reduce the ionic flow. It was assumed that the reduction in the ionic flow would allow single-channel recordings to characterise the length, and hence other characteristics, of the polynucleotide. In Branton's proposal, α-hemolysin was used to form a single channel across a lipid bilayer separating two buffer-filled compartments [31]. α-Hemolysin is a monomeric, 33 kDa, 293-residue protein that is secreted by the human pathogen Staphylococcus aureus [33].
The nanopores are produced when α-hemolysin subunits are introduced into a buffered solution that separates the lipid bilayer into two compartments (known as cis and trans): the head of t
Introduction Reliable DNA sequencing became a reality in 1977, when Frederick Sanger perfected the chain termination method to sequence the genome of bacteriophage φX174 [1][2]. Before Sanger's proposal of the chain termination method, there was the plus and minus method, also presented by Sanger along with Coulson [2]. The plus and minus method depended on the use of DNA polymerase to copy the specific DNA sequence under controlled conditions. This method was considered efficient and simple, but it was not accurate [2]. Alongside Sanger's chain termination method, another method of DNA sequencing was introduced by Maxam and Gilbert, involving chemical cleavage, which was also reported in 1977, the same year as Sanger's method. The Maxam and Gilbert method shall be discussed in more detail later in this essay. The proposal of these two methods spurred many further DNA sequencing methods, and as the technology developed, so did DNA sequencing. In this literature review, the various DNA sequencing technologies shall be examined, as well as their applications in the real world and the tools that have aided DNA sequencing, e.g. PCR. This review shall begin with a discussion of the chain termination method by Sanger. The Chain Termination Method Sanger discovered that the inhibitory activity of 2′,3′-dideoxythymidine triphosphate (ddTTP) on DNA polymerase I was dependent on its incorporation into the growing oligonucleotide chain in the place of thymidylic acid (dT) [2]. In the structure of ddT there is no 3′-hydroxyl group; a hydrogen atom is in its place. With the hydrogen in place of the hydroxyl group, the chain cannot be extended any further, so termination occurs at the position where dT would be incorporated. Figure 1 shows the structures of dNTP and ddNTP. 
In order to remove the 3′-hydroxyl group and replace it with a proton, the triphosphate has to undergo a chemical procedure [1]. A different procedure is employed for each of the triphosphates. ddATP was prepared from the starting material 3′-O-tosyl-2′-deoxyadenosine, which was treated with sodium methoxide in dimethylformamide to produce 2′,3′-dideoxy-2′,3′-didehydroadenosine, an unsaturated compound [4]. The double bond between carbons 2′ and 3′ of the cyclic ether was then hydrogenated with a palladium-on-carbon catalyst to give 2′,3′-dideoxyadenosine (ddA). The ddA was then phosphorylated in order to add the triphosphate group. Purification then took place on a DEAE-Sephadex column using a gradient of triethylamine carbonate at pH 8.4. Figure 2 is a schematic representation of the production of ddA prior to phosphorylation. In the preparation of ddTTP (Figure 3), thymidine was tritylated (+C(Ph)3) at the 5′-position and a methanesulphonyl (+CH3SO2) group was introduced at the 3′-OH group [5]. The methanesulphonyl group was substituted with iodine by refluxing the compound in 1,2-dimethoxyethane in the presence of NaI. After chromatography on a silica column, the 5′-trityl-3′-iodothymidine was hydrogenated in 80% acetic acid to remove the trityl group. The resultant 3′-iodothymidine was hydrogenated to produce 2′,3′-dideoxythymidine, which subsequently was phosphorylated. 
Once phosphorylated, ddTTP was then purified on a DEAE-Sephadex column with a triethylammonium hydrogen carbonate gradient. Figure 3 is a schematic representation of the production of ddT prior to phosphorylation. When preparing ddGTP, the starting material was N-isobutyryl-5′-O-monomethoxytrityldeoxyguanosine [1]. After tosylation of the 3′-OH group, the compound was converted to the 2′,3′-didehydro derivative with sodium methoxide. The isobutyryl group was partly removed during this treatment with sodium methoxide and was removed completely by incubation in the presence of NH3 overnight at 45 °C. During the overnight incubation period, the didehydro derivative was reduced to the dideoxy derivative and then converted to the triphosphate. The triphosphate was purified by fractionation on a DEAE-Sephadex column using a triethylamine carbonate gradient. Figure 4 is a schematic representation of the production of ddG prior to phosphorylation. The preparation of ddCTP was similar to that of ddGTP, but started from N-anisoyl-5′-O-monomethoxytrityldeoxycytidine. However, the purification process was omitted for ddCTP, as it produced a very low yield; the solution was therefore used directly in the experiment described in the paper [2]. Figure 5 is a schematic representation of the production of ddC prior to phosphorylation. With the four dideoxy samples prepared, the sequencing procedure can commence. The dideoxy samples are placed in separate tubes, along with primer fragments obtained from the φX174 replicative form, DNA polymerase and the four dNTPs [2]. The polymerase begins strand synthesis, and when a ddNTP is incorporated into the growing polynucleotide it terminates further synthesis. This is due to the lack of the hydroxyl group at the 3′ position of the ddNTP, which prevents the next nucleotide from attaching to the strand. The contents of the four tubes are separated by gel-electrophoresis on acrylamide gels (see Gel-Electrophoresis). Figure 6 shows the sequencing procedure. Reading the sequence is straightforward [1]. 
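The read-off from the four lanes can be sketched in a few lines of Python. This is a toy model, not part of the original protocol: for simplicity it treats the synthesised strand directly (ignoring template complementarity), and the function names are hypothetical. Fragment lengths in each ddNTP lane mark every position where that base occurs, and sorting all bands by length reads the sequence from the bottom of the gel upward.

```python
# Toy model of chain-termination read-out (illustrative, not Sanger's protocol).

def lane_fragments(strand):
    """Map each base to the lengths of fragments terminated in its ddNTP lane."""
    lanes = {"A": [], "C": [], "G": [], "T": []}
    for position, base in enumerate(strand, start=1):
        # the chain terminates wherever the matching ddNTP is incorporated
        lanes[base].append(position)
    return lanes

def read_gel(lanes):
    """Read the sequence from the shortest fragment (fastest band) upward."""
    bands = sorted((length, base)
                   for base, lengths in lanes.items()
                   for length in lengths)
    return "".join(base for _, base in bands)

strand = "ATGCGTA"
assert read_gel(lane_fragments(strand)) == strand
```

Sorting by fragment length is exactly what the gel does physically: the shortest terminated strand migrates furthest, so the band order reproduces the base order.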
The band that has moved the furthest is located first; this represents the smallest piece of DNA and is the strand terminated by incorporation of the dideoxynucleotide at the first position in the template. The track in which this band occurs is noted. For example (as shown in Figure 6), the band that moved the furthest is in track A, so the first nucleotide in the sequence is A. To find the next nucleotide, the next most mobile band is located, corresponding to a DNA molecule one nucleotide longer than the first; in this example the band is in track T. Therefore the second nucleotide is T, and the overall sequence so far is AT. The process is carried on along the autoradiograph until the individual bands start to close in and become inseparable, and therefore hard to read. In general it is possible to read up to 400 nucleotides from one autoradiograph with this method. Figure 7 is a schematic representation of an autoradiograph. Ever since Sanger perfected this method of DNA sequencing, there have been advances in sequencing methods along with notable achievements. Certain achievements, such as the Human Genome Project, shall be discussed later in this review. Gel-Electrophoresis Gel-electrophoresis is defined as the movement of charged molecules in an electric field [1][8]. DNA molecules, like many other biological compounds, carry an electric charge. In the case of DNA, this charge is negative. Therefore, when DNA molecules are placed in an electric field, they migrate towards the positive pole (as shown in Figure 8). There are three factors which affect the rate of migration: shape, electrical charge and size. The polyacrylamide gel comprises a complex network of pores through which the molecules must travel to reach the anode. Maxam and Gilbert Method The Maxam and Gilbert method was proposed in the same year as the Sanger method. Sanger's method involves the enzymatic generation of radiolabelled fragments from unlabelled DNA strands [2]. 
The Maxam-Gilbert method, by contrast, involves chemical cleavage of prelabelled DNA strands in four different ways to form four different collections of labelled fragments [6][7]. Both methods use gel-electrophoresis to separate the target DNA molecules [8]. However, Sanger's chain termination method has proven to be simpler and easier to use than the Maxam and Gilbert method [9]; indeed, the literature textbooks tend to explain Sanger's method of DNA sequencing rather than Maxam and Gilbert's [1][3][9][10]. In Maxam and Gilbert's method there are two chemical cleavage reactions [6][7]. One of the reactions takes place at guanine and adenine, the two purines, and the other cleaves the DNA at cytosine and thymine, the pyrimidines. A specific reagent is used for each cleavage reaction: the purine-specific reagent is dimethyl sulphate and the pyrimidine-specific reagent is hydrazine. Each reaction is carried out differently, as each of the four bases has different chemical properties. The cleavage reaction for guanine/adenine uses dimethyl sulphate to add a methyl group to the guanines at the N7 position and to the adenines at the N3 position [7]. The glycosidic bond of a methylated adenine is unstable and breaks easily on heating at neutral pH, leaving the sugar free. Treatment with 0.1 M alkali at 90 °C then cleaves the sugar from the neighbouring phosphate groups. When the resulting end-labelled fragments are resolved on a polyacrylamide gel, the autoradiograph contains a pattern of dark and light bands. The dark bands arise from breakage at the guanines, which are methylated at a rate 5-fold faster than the adenines. In this reaction the guanine bands therefore appear stronger than the adenine bands, which can lead to misinterpretation, so an adenine-enhanced cleavage reaction is also carried out. 
Figure 9 shows the structural changes of guanine when undergoing the modifications involved in Maxam-Gilbert sequencing. In the adenine-enhanced cleavage, the glycosidic bond of methylated adenosine is less stable than that of methylated guanosine, so gentle treatment with dilute acid at the methylation step releases the adenine, allowing darker bands to appear on the autoradiograph [7]. The chemical cleavage at cytosine and thymine residues involves hydrazine instead of dimethyl sulphate. The hydrazine cleaves the base, leaving ribosylurea [7]. After partial hydrazinolysis in 15-18 M aqueous hydrazine at 20 °C, the DNA is cleaved with 0.5 M piperidine. The piperidine (a cyclic secondary amine), as the free base, displaces all the products of the hydrazine reaction from the sugars and catalyses the β-elimination of the phosphates. The final pattern contains bands of similar intensity from the cleavages at the cytosines and thymines. For cleavage at cytosine alone, the presence of 2 M NaCl preferentially suppresses the reaction of thymine with hydrazine. Once the cleavage reaction has taken place, each original strand is broken into a labelled fragment and an unlabelled fragment [7]. All the labelled fragments start at the 5′ end of the strand and terminate at the base that precedes the cleavage site along the original strand. Only the labelled fragments are recorded on the gel electrophoresis. Dye-labelled terminators For many years DNA sequencing was done by hand, which is both laborious and expensive [3]. Before automated sequencing, about 4 × 10⁶ bases of DNA had been sequenced following the introduction of the Sanger and Maxam-Gilbert methods [11]. Both methods require four sets of reactions and a subsequent electrophoresis step in adjacent lanes of a high-resolution polyacrylamide gel. With the new automated sequencing procedures, four different fluorophores are used, one in each of the base-specific reactions. 
The reaction products are combined and co-electrophoresed, and the DNA fragments generated in each reaction are detected near the bottom of the gel and identified by their colour. Sanger's method was chosen as the basis for automated sequencing, because it had proven to be the most durable and efficient method of DNA sequencing and was the choice of most investigators in large-scale sequencing [12]. Figure 10 shows how a typical sequence is generated using an automated sequencer. The selection of the dyes was central to the development of automated DNA sequencing [11]. The fluorophores that were selected had to meet several criteria. For instance, the absorption and emission maxima had to be in the visible region of the spectrum [11], which is between 380 nm and 780 nm [10], and each dye had to be easily distinguishable from the others [11]. The dyes should also not impair the hybridisation of the oligonucleotide primer, as this would decrease the reliability of synthesis in the sequencing reactions. Figure 11 shows the structures of the dyes used in a typical automated sequencing procedure, where X is the moiety through which the dye is bound. Table 1 shows which dye is covalently attached to which nucleotide in a typical automated DNA sequencing procedure.
Table 1
Dye - Nucleotide attached
Fluorescein - Adenosine
NBD - Thymine
Tetramethylrhodamine - Guanine
Texas Red - Cytosine
In designing the instrumentation for fluorescence detection, the primary consideration was sensitivity. As the concentration of each band on the co-electrophoresis gel is around 10 M, the instrument needs to be capable of detecting dye concentrations of that order. This level of detection can readily be achieved by commercial spectrofluorimeter systems. Unfortunately, detection from a gel leads to a much higher background scatter, which in turn decreases sensitivity. 
This is solved by using a laser excitation source in order to obtain maximum sensitivity [11]. Figure 12 is a schematic diagram of the instrument with an explanation of the instrumentation employed. When analyzing the data, Hood found some complications [11]. Firstly, the emission spectra of the different dyes overlapped; to overcome this, multicomponent analysis was employed to determine the amounts of the four dyes present in the gel at any given time. Secondly, the different dye molecules impart non-identical electrophoretic mobilities to the DNA fragments, so oligonucleotides of equal base length did not migrate identically. The third major complication comes from the imperfections of the enzymatic methods; for instance, there are often regions of the autoradiograph that are difficult to sequence. These complications were overcome in five steps [11]:
1. High-frequency noise is removed using a low-pass Fourier filter.
2. A time delay (1.5-4.5 s) between measurements at different wavelengths is partially corrected for by linear interpolation between successive measurements.
3. A multicomponent analysis is performed on each set of four data points; this computation yields the amount of each of the four dyes present in the detector as a function of time.
4. The peaks present in the data are located.
5. The mobility shift introduced by the dyes is corrected for using empirically determined correction factors.
Since the publication of Hood's proposal of fluorescence detection in automated DNA sequence analysis, research has focussed on developing systems with better sensitivity [12]. Bacterial and Viral Genome Sequencing (Shotgun Sequencing) Prior to 1995, many viral genomes had been sequenced using Sanger's chain termination technique [13], but no bacterial genome had been sequenced. 
The viral genomes that had been sequenced include the 229 kb genome of cytomegalovirus [14] and the 192 kb genome of vaccinia [15]; the 187 kb mitochondrial and 121 kb chloroplast genomes of Marchantia polymorpha had also been sequenced [16]. Viral genome sequencing had been based upon the sequencing of clones usually derived from extensively mapped restriction fragments, or from λ or cosmid clones [17]. Despite advances in DNA sequencing technology, the sequencing of genomes had not progressed beyond clones on the order of ~250 kb, owing to the lack of computational approaches that would enable the efficient assembly of a large number of fragments into an ordered single assembly [13][17]. To address this, Venter and Smith proposed shotgun sequencing in 1995, which enabled Haemophilus influenzae (H. influenzae) to become the first bacterial genome to be sequenced [13][17]. H. influenzae was chosen as its base composition is similar to that of a human, with 38% of the sequence made of G + C. Table 2 shows the procedure of shotgun sequencing [17]. When constructing the library, ultrasonic waves were used to randomly fragment the genomic DNA into fairly small pieces of about the size of a gene [13]. The fragments were purified and then attached to plasmid vectors [13][17]. The plasmid vectors were then inserted into an E. coli host cell to produce a library of plasmid clones. The E. coli host cell strains had no restriction enzymes, which prevented deletions, rearrangements and loss of the clones [17]. The fragments are randomly sequenced using automated sequencers (see Dye-Labelled Terminators), with T7 and SP6 primers used to sequence the ends of the inserts, to enable coverage of the genome by a factor of 6 [17]. 
Table 2 (Reference 17)
Stage - Description
Random small insert and large insert library construction - Shear genomic DNA randomly to ~2 kb and 15 to 20 kb respectively
Library plating - Verify random nature of library and maximize random selection of small insert and large insert clones for template production
High-throughput DNA sequencing - Sequence sufficient number of fragments from both ends for 6x coverage
Assembly - Assemble random sequence fragments and identify repeat regions
Gap closure (physical gaps) - Order all contigs (fingerprints, peptide links, λ clones, PCR) and provide templates for closure
Gap closure (sequence gaps) - Complete the genome sequence by primer walking
Editing - Inspect the sequence visually and resolve sequence ambiguities, including frameshifts
Annotation - Identify and describe all predicted coding regions (putative identifications, starts and stops, role assignments, operons, regulatory regions)
Once the sequencing reactions have been completed, the fragments need to be assembled; this is done using the TIGR Assembler software (The Institute for Genomic Research) [17]. The TIGR Assembler simultaneously clusters and assembles fragments of the genome. In order to obtain the speed necessary to assemble more than 10⁴ fragments [17], an algorithm is used to build a table of all 10-bp oligonucleotide subsequences and generate a list of potential sequence fragment overlaps. The algorithm begins with an initial contig (a single fragment); to extend the contig, a candidate fragment is selected based on its overlapping oligonucleotide content. The initial contig and candidate fragment are aligned by a modified version of the Smith-Waterman algorithm [18], which allows optional gapped alignments. The contig is extended by the fragment only if strict criteria of overlap content are met. The algorithm automatically lowers these criteria in regions of minimal coverage and raises them in regions with a possible repetitive element [17]. 
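The overlap-driven assembly loop described above can be sketched in Python. This is a hypothetical simplification, not the TIGR Assembler itself: an exact suffix-prefix match stands in for the modified Smith-Waterman alignment, and a shared-k-mer test (with a short k for readability) stands in for the 10-bp oligonucleotide table used to prefilter candidate fragments.

```python
# Greedy overlap assembly sketch (assumed simplification of the approach in [17]).

def shares_kmer(a, b, k=4):
    """Cheap prefilter: do the two sequences share any k-bp subsequence?"""
    kmers = {a[i:i+k] for i in range(len(a) - k + 1)}
    return any(b[i:i+k] in kmers for i in range(len(b) - k + 1))

def overlap(contig, frag, min_len=4):
    """Length of the longest suffix of contig equal to a prefix of frag."""
    for n in range(min(len(contig), len(frag)), min_len - 1, -1):
        if contig.endswith(frag[:n]):
            return n
    return 0

def assemble(fragments):
    """Extend an initial contig greedily with the best-overlapping fragment."""
    contig = fragments[0]
    remaining = list(fragments[1:])
    while remaining:
        best = max(remaining,
                   key=lambda f: overlap(contig, f) if shares_kmer(contig, f) else 0)
        n = overlap(contig, best)
        if n == 0:
            break  # no usable overlap; a real assembler would start a new contig
        contig += best[n:]
        remaining.remove(best)
    return contig

print(assemble(["ATGGCGTA", "GCGTACCT", "TACCTGGA"]))  # ATGGCGTACCTGGA
```

The k-mer prefilter is what makes the real algorithm fast: expensive alignment is only attempted on fragment pairs that already share oligonucleotide content.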
The TIGR Assembler is designed to take advantage of large clone sizes [17]. It also enforces the constraint that sequences from the two ends of the same template must point toward one another in the contig and be located within a certain range of base pairs [17]. The TIGR Assembler thus provides the computational power to assemble the fragments. Once the fragments have been aligned, the TIGR Editor is used to proofread the sequence and check for any ambiguities in the data [17]. The technique does require precautionary care; for instance, the small insert library should be constructed and end-sequenced concurrently [17], and it is essential that the sequence fragments are of the highest quality and rigorously checked for any contamination [17]. Pyrosequencing Most DNA sequencing methods require gel-electrophoresis, but in 1996, at the Royal Institute of Technology, Stockholm, Ronaghi proposed pyrosequencing [19][20]. This is an example of sequencing-by-synthesis, where DNA molecules are clonally amplified on a template, and this template then undergoes sequencing [25]. The approach relies on the detection of DNA polymerase activity through the enzymatic luminometric detection of the inorganic pyrophosphate (PPi) that is released during DNA synthesis, and offers the advantage of real-time detection [19]. Ronaghi used Nyren's [21] description of an enzymatic system consisting of DNA polymerase, ATP sulphurylase and luciferase to couple the release of PPi, obtained when a nucleotide is incorporated by the polymerase, with light emission that can easily be detected by a luminometer or photodiode [20]. When PPi is released, it is immediately converted to adenosine triphosphate (ATP) by ATP sulphurylase, and the level of generated ATP is sensed by luciferase, producing photons [19][20][21]. Unused ATP and deoxynucleotides are degraded by the enzyme apyrase. 
The presence or absence of PPi, and therefore the incorporation or non-incorporation of each nucleotide added, is ultimately assessed on the basis of whether or not photons are detected. There is minimal time lapse between these events, and the conditions of the reaction are such that iterative addition of nucleotides and PPi detection are possible. The PPi released upon nucleotide incorporation is detected by ELIDA (Enzymatic Luminometric Inorganic pyrophosphate Detection Assay) [19][21]. Within the ELIDA, the PPi is converted to ATP with the help of ATP sulphurylase, and the ATP reacts with luciferin to generate light, more than 6 × 10⁹ photons at a wavelength of 560 nm, which can be detected by a photodiode, photomultiplier tube, or charge-coupled device (CCD) camera [19][20]. As mentioned before, the DNA molecules need to be amplified by the polymerase chain reaction (PCR), which is discussed later. Ronaghi observed that dATP interfered with the detection system [19]. This interference is a major problem when the method is used to detect a single-base incorporation event, and it was rectified by replacing dATP with dATPαS (deoxyadenosine α-thiotriphosphate). It was noticed that adding a small amount of dATP (0.1 nmol) induces an instantaneous increase in the light emission followed by a slow decrease until it reaches a steady-state level (as Figure 11 shows). This makes it impossible to start a sequencing reaction by adding dATP; the reaction must instead be started by addition of DNA polymerase. The signal-to-noise ratio was also higher for dATP than for the other nucleotides. On the other hand, addition of 8 nmol of dATPαS (80-fold more than the amount of dATP) had only a minor effect on the luciferase (as Figure 14 shows). However, dATPαS is less than 0.05% as effective as dATP as a substrate for luciferase [19]. 
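The pyrosequencing read-out can be illustrated with a toy Python model; this is an assumption-laden sketch, not Ronaghi's apparatus. Nucleotides are dispensed one at a time in a fixed order, and each dispensation yields a light signal proportional to the number of bases incorporated, so a homopolymer run gives a proportionally taller pyrogram peak. For simplicity the `template` argument here holds the bases of the strand being synthesised rather than its complement, and enzyme kinetics (apyrase degradation, luciferase response) are ignored.

```python
# Toy pyrogram model (illustrative only; names and dispensation order assumed).

def pyrosequence(template, dispensation_order="ACGT", cycles=8):
    """Return (dispensed_base, light_signal) pairs for a growing strand."""
    pyrogram = []
    pos = 0
    for _ in range(cycles):
        for base in dispensation_order:
            incorporated = 0
            # a matching base extends the strand; homopolymer runs extend repeatedly
            while pos < len(template) and template[pos] == base:
                incorporated += 1
                pos += 1
            pyrogram.append((base, incorporated))  # light is proportional to PPi released
            if pos == len(template):
                return pyrogram
    return pyrogram

print(pyrosequence("AACG"))  # [('A', 2), ('C', 1), ('G', 1)]
```

The doubled first peak for the AA run is the key property of real pyrograms: peak height, not just peak presence, carries sequence information.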
Pyrosequencing has been adapted by 454 Life Sciences for sequencing by synthesis [22] and is known as the Genome Sequencer (GS) FLX [23][24]. The 454 system starts with random ssDNA (single-stranded) fragments, and each random fragment is bound to a bead under conditions that allow only one fragment per bead [22]. Once the fragment is attached to the bead, clonal amplification occurs via emulsion. The emulsified beads are purified, placed in microfabricated picolitre wells and then subjected to pyrosequencing. A lens array in the detection portion of the instrument focuses luminescence from each well onto the chip of a CCD camera. The CCD camera images the plate every second in order to detect the progression of the pyrosequencing [20][22]. The pyrosequencing machine generates raw data in real time, in the form of bioluminescence generated from the reactions, and the data is presented on a pyrogram [20]. Sequencing by Hybridisation The chain termination, Maxam and Gilbert, and pyrosequencing techniques discussed earlier are all direct methods of sequencing DNA, where each base position is determined individually [26]. There are also indirect methods, in which the DNA sequence is assembled based on experimental determination of the oligonucleotide content of the chain. One promising indirect method is called sequencing by hybridisation, in which sets of oligonucleotide probes are hybridised under conditions that allow the detection of complementary sequences in the target nucleic acid [26]. Sequencing by hybridisation (SBH) was proposed by Drmanac et al in 1987 [27] and is based on Doty's observation that when DNA is heated in solution, the double strand melts to form single-stranded chains, which then renature spontaneously when the solution is cooled [28]. This raises the possibility of one piece of DNA recognising another. 
This led to Drmanac's proposal of oligonucleotide probes being hybridised under these conditions, allowing the complementary sequence in the DNA target to be detected [26][27]. In SBH, an oligonucleotide probe (an n-mer probe, where n is the length of the probe) is a substring of a DNA sample; the process is similar to doing a keyword search in a page full of text [29]. The set of positively expressed probes is known as the spectrum of the DNA sample. For example, suppose the single-stranded DNA 5′-GGTCTCG-3′ is to be sequenced using 4-mer probes, and 5 probes hybridise onto the sequence successfully. The remaining probes form hybrids with a mismatch at the end base and are denatured during selective washing. The probes that are a good match at the end base result in fully matched hybrids, which are retained and detected. Each positively expressed probe serves as a platform to decipher the next base, as is seen in Figure 16. The probes that have successfully hybridised onto the sequence then need to be detected. This is achieved by labelling the probes with dyes such as Cyanine3 (Cy3) and Cyanine5 (Cy5), so that the degree of hybridisation can be detected by imaging devices [29]. SBH methods are ideally suited to microarray technology due to their inherent potential for parallel sample processing [29]. An important advantage of using a DNA array rather than a multiple probe array is that all the resulting probe-DNA hybrids in any single probe hybridisation are of identical sequence [29]. One of the main types of DNA hybridisation array format is the oligonucleotide array, which is currently patented by Affymetrix [30]. The commercial uses of this shall be discussed under the application of the DNA array (Affymetrix). Due to the small size of the hybridisation array and the small amount of the target present, it is a challenge to acquire the signals from a DNA array [29]. These signals must first be amplified before they can be detected by the imaging devices. 
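The spectrum idea can be made concrete with a short Python sketch. This is illustrative only: probe chemistry, labelling and washing are ignored, the function names are invented, and the reconstruction rule is a simplified greedy chain over (n-1)-base overlaps rather than a full SBH decoding.

```python
# Minimal SBH sketch (assumed simplification; not Drmanac's procedure).

def spectrum(dna, n=4):
    """All n-mer probes that would fully hybridise to the sample."""
    return {dna[i:i+n] for i in range(len(dna) - n + 1)}

def reconstruct(probes, start, length):
    """Greedily chain probes whose (n-1)-base prefix matches the current end."""
    seq = start
    probes = set(probes) - {start}
    while len(seq) < length:
        tail = seq[-(len(start) - 1):]
        nxt = next((p for p in probes if p.startswith(tail)), None)
        if nxt is None:
            break  # spectrum is ambiguous or incomplete at this point
        seq += nxt[-1]
        probes.discard(nxt)
    return seq

sample = "GGTCTCG"
probes = spectrum(sample)
assert reconstruct(probes, "GGTC", len(sample)) == sample
```

Each positively expressed probe deciphers exactly one further base, which is the stepping-stone behaviour described for Figure 16; repeated n-mers in a real target make the chaining ambiguous, which is one of SBH's known limitations.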
Signals can be boosted by two means, namely target amplification and signal amplification. In target amplification, such as PCR, the amount of target is increased to enhance signal strength, while in signal amplification the amount of signal per unit of target is increased. Nanopore Sequencing Nanopore sequencing was proposed in 1996 by Branton et al, who showed that individual polynucleotide molecules can be characterised using a membrane channel [31]. Nanopore sequencing is an example of single-molecule sequencing, in which the concept of sequencing-by-synthesis is followed but without the prior amplification step [24]. This is achieved by measuring the ionic conductance as a nucleotide passes through a single ion channel in a biological membrane or planar lipid bilayer. The measurement of ionic conductance is routine in neurobiology and biophysics [31], as well as in pharmacology (Ca²⁺ and K⁺ channels) [32] and biochemistry [9]. Most channels undergo voltage-dependent or ligand-dependent gating, but there are several large ion channels (e.g. Staphylococcus aureus α-hemolysin) which can remain open for extended periods, thereby allowing a continuous ionic current to flow across a lipid bilayer [31]. A transmembrane voltage applied across an open channel of appropriate size should draw DNA molecules through the channel as extended linear chains, whose presence would detectably reduce the ionic flow. It was assumed that the reduction in ionic flow seen in single-channel recordings could be used to characterise the length, and in turn other characteristics, of the polynucleotide. In Branton's proposal, α-hemolysin was used to form a single channel across a lipid bilayer separating two buffer-filled compartments [31]. α-Hemolysin is a monomeric, 33 kDa, 293-residue protein that is secreted by the human pathogen Staphylococcus aureus [33]. 
The nanopores are produced when α-hemolysin subunits are introduced into a buffered solution that is separated by a lipid bilayer into two compartments (known as cis and trans): the head of t

Friday, September 20, 2019

The Battle Of Iwo Jima

The Battle Of Iwo Jima During World War II, on February 19, 1945, the United States of America and the Empire of Japan began fighting for Iwo Jima, a small island approximately 660 miles from Japan. Codenamed Operation Detachment by the United States, the battle lasted for 35 days, ending on March 26, 1945, and it remains the largest battle in Marine Corps history, with some 75,144 men deployed to fight (Frank). The battle of Iwo Jima also marked the first time that American casualties were higher than Japanese casualties in an amphibious assault: American casualties reached 24,733, while Japanese casualties were a little over 21,570 (Frank, Naval History). This was due to the leadership of the Japanese during the battle. The general in command of the Japanese forces at Iwo Jima was Lieutenant General Tadamichi Kuribayashi, and during the battle he would show that he was one of Japan's finest generals. In preparation for the upcoming battle, Lieutenant General Kuribayashi chose to focus his defense on the northern two-thirds of Iwo Jima, instead of on the beaches where the United States would land troops (Frank). Kuribayashi knew that Japan could not beat the United States, simply because of the number of soldiers the United States would send. Knowing this, Kuribayashi decided not to concentrate his efforts on the southern beaches and lose quickly to a superior American force; instead he created strong defensive positions on the rest of the island to increase American casualties. It was Kuribayashi's belief that if his forces could inflict enough American casualties, the United States would not be compelled to invade Japan, fearing that they would lose too many soldiers. 
In the Pacific Campaign of World War II, the United States used a strategy called island hopping: the United States would attack a Japanese-controlled island, capture it, and then repeat the process until they got to Japan. This was the United States' strategy to defeat Japan, and Iwo Jima was the next island to be captured. Iwo Jima was also strategically important because of the airfields located on it (Burrell). Iwo Jima was close enough to Japan that the United States could use its airfields to attack Japan from the air with B-29 bombers. This was the main reason why Japan defended the island so heavily. While the island of Iwo Jima was important to the defense of mainland Japan, it was of little offensive importance, because by this time Japan's strategy was strictly based on the defense of the mainland. One Japanese officer described Iwo Jima's offensive relevance as such: Our first line Army and Naval air forces had been exhausted in the recent Philippines Operation. The anticipation to restore our air forces, bringing their combined number to 3,000 planes, could materialize only by March or April and even then, mainly because the types of airplanes and their performance proved to be impracticable for operations extending beyond 550 miles radius, we could not use them for operations in the Bonin Islands area (Burrell). Before the actual land invasion began, the United States bombed the southern part of Iwo Jima for three days before landing troops there. This is where American intelligence failed significantly in two ways: it underestimated Kuribayashi's forces by at least a third, and it completely missed Kuribayashi's intent to make his last stand at the north end of the island instead of facing the Americans head on at the south end. 
These errors caused the three-day bombardment, the heaviest of the war, to be misdirected at the southern landing beaches instead of at the northern side of the island, where the majority of Kuribayashi's forces would be. When the land invasion began, American forces were met with no initial resistance from the Japanese. Instead of attacking the landing forces head on, the Japanese waited for the Americans to advance onto the beach, then ambushed them as they closed in on the Japanese positions. Not only did the ambush cause a great number of initial American casualties, but it was also difficult for the Marines to fight back because of the terrain of the beach. Instead of sand, the beach was covered in volcanic ash, which made it hard for the landing forces to dig into the ground and defend themselves. One Marine described it as "trying to fight in a bin of loose wheat" (Frank). American forces eventually broke the Japanese line, and on February 23, 1945, the southern end of Iwo Jima was captured. As the United States pushed forward, its troops were met with heavy resistance from the Japanese, who were well fortified and prepared to face the enemy. The farther north the Americans advanced, the harder the fighting became. The Japanese had dug many bunkers into the terrain and successfully used ambush tactics against the Marines, which made the advance even more difficult. As the battle continued, the Marines adapted to fighting the Japanese on rough terrain, and with their superior forces drove the Japanese back until they could retreat no more. The Marines fought for a long and tiring 35 days until, on March 26, 1945, the island was officially declared secure by American forces. In addition to being a historic battle of World War II, the Battle of Iwo Jima has had a significant effect on American culture, and traces of the battle can be seen in many art forms and popular media in America.
Raising the Flag on Iwo Jima, a photograph taken by American photographer Joe Rosenthal, depicts five Marines and a Navy corpsman raising the flag on Mount Suribachi, at the southern end of Iwo Jima, on February 23, 1945. The photograph became a symbol of American patriotism during World War II and was even commemorated on a postage stamp. The battle is also depicted in a movie directed by Clint Eastwood called Letters from Iwo Jima. In the movie, Eastwood shows the Battle of Iwo Jima from the Japanese side, depicting what Japanese soldiers experienced as the battle was fought. The movie won an Academy Award for Best Sound Editing and was nominated for three more awards for its depiction of the historic battle. In conclusion, the Battle of Iwo Jima was one of the most important battles of the Pacific front of World War II. By successfully capturing the island, the United States acquired its airfields; with these under American control, B-29 bombers could use Iwo Jima to launch aerial assaults against Japan and as a fueling station closer to Japan. The battle also showed the United States how far the Japanese were willing to go to defend their homeland. Out of an initial force of more than 20,000, only 1,083 Japanese soldiers were captured alive (Frank). This showed the United States that Japanese soldiers were willing to fight to the death to defend their home, and that if the United States had invaded Japan itself, the casualties would have been catastrophic.