Intro to Statistics Essay


Statistics is the branch of mathematics used to collect, analyze, interpret, and present data. It is used by business owners to make calculated decisions regarding the future of their companies.

Types of Statistics

There are two types of statistics. Descriptive statistics deal with describing a set of data elements graphically. This type of statistic does not make any sort of prediction, but rather presents the data in summary form. An example of descriptive statistics would be a line graph that reflects the United States population by year for the last ten years.

A person looking at this data would easily be able to determine how the population has increased or decreased in the past decade. The data would be insufficient, however, to determine what the population will be ten years from now. The second type of statistics is inferential statistics. Inferential statistics uses a sampling of information to infer a future outcome. This is often referred to as the ‘best guess’ method of statistics.

This type of information is what businesses in particular use to make educated decisions for future planning.

Levels of Statistics

There are also several levels of statistics. Nominal level statistics portray objects by name or by a label. Ordinal level statistics order data, such as by number or by letter. Interval level statistics order data by the differences, or intervals, between values. An example would be a thermometer labeled in degrees Celsius. These are all useful ways to organize data, but the more reliable and widely used level is ratio level statistics.

In ratio statistics there is a natural zero starting point. This one fact gives the intervals between data actual meaning. A person can actually compare measurements using the ratio method.

Business Decisions

All business owners want to succeed in their chosen field. The most logical way for a company to obtain longevity is to remain educated about what their business will face in the future. Statistics are used in this capacity. A company will prosper if it is able to use statistical data to make sound business decisions.

For example, the manager of a warehouse knows that from November through January five employees must be added to the staff in order to meet customer demand. The manager knows this because it is a trend reflected in statistics published by the sales department. To be successful throughout the season of higher demand, there needs to be an allowance in the budget for the salaries of these five extra workers. The statistics presented to management allow for proper planning.

Some Common Problems

In any business there are many factors to consider when an owner is looking at the possibility of expanding. Statistics based on a certain demographic area will help to choose an affluent location where the business will prosper. Another situation in which statistics play a major role is financing. In order for a new business to obtain funding, a business plan is needed. Statistics are presented by the aspiring business owner in the business plan to show the need for what he or she has to offer.

These statistics are also used by the financing entity to determine the risk factor within an industry. Things of this nature can greatly affect the outcome of a business loan. A more daunting problem some businesses face is whether or not to lay off employees. Statistics are used in this case to determine how many people would need to be laid off to keep the company afloat. This is a harsh problem for a company to face, but it is a situation many businesses have found themselves dealing with in recent years.

This is an even more important reason to have reliable and trustworthy statistics available to aid in the decision-making process. The overall idea of statistics used in business is to make beneficial and financially sound decisions. The more facts that are involved in any decision-making process, the better. In business, the owner of a company has a responsibility to act intelligently when making decisions that will affect multiple employees and their families. Statistics are the main tools needed for a successful business structure.


What Are Descriptive Statistics Essay


INTRODUCTION

Statistical procedures can be divided into two major categories: descriptive statistics and inferential statistics. Typically, in most research conducted on groups of people, you will use both descriptive and inferential statistics to analyse your results and draw conclusions. So what are descriptive and inferential statistics? And what are their differences? We have seen that descriptive statistics provide information about our immediate group of data. For example, we could calculate the mean and standard deviation of the exam marks for the 100 students, and this could provide valuable information about this group of 100 students.
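As a minimal sketch of that calculation in Python (the 100 marks below are simulated, not real data):

```python
import numpy as np

rng = np.random.default_rng(42)
marks = rng.normal(loc=62, scale=12, size=100).clip(0, 100)  # simulated exam marks

# Descriptive statistics for this group of 100 students:
print("mean:", marks.mean())
print("standard deviation:", marks.std())  # population SD (ddof=0), since the 100 marks ARE the population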

Any group of data like this, which includes all the data you are interested in, is called a population. A population can be small or large, as long as it includes all the data you are interested in.

For example, if you were only interested in the exam marks of 100 students, the 100 students would represent your population. Descriptive statistics are applied to populations, and the properties of populations, like the mean or standard deviation, are called parameters as they represent the whole population (i.e., everybody you are interested in).

Often, however, you do not have access to the whole population you are interested in investigating, but only a limited number of data instead. For example, you might be interested in the exam marks of all students in the UK. It is not feasible to measure all exam marks of all students in the whole of the UK, so you have to measure a smaller sample of students (e.g., 100 students), which is used to represent the larger population of all UK students.

Properties of samples, such as the mean or standard deviation, are not called parameters, but statistics. Inferential statistics are techniques that allow us to use these samples to make generalizations about the populations from which the samples were drawn. It is, therefore, important that the sample accurately represents the population. The process of achieving this is called sampling. Inferential statistics arise out of the fact that sampling naturally incurs sampling error, and thus a sample is not expected to perfectly represent the population. The methods of inferential statistics are (1) the estimation of parameter(s) and (2) the testing of statistical hypotheses.
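To make the estimation idea concrete, here is a small simulated sketch (all numbers are invented): a sample of 100 marks is used to build a 95% confidence interval for the unknown population mean, and the width of that interval reflects sampling error.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
population = rng.normal(65, 10, size=100_000)  # stand-in for "all UK students"

sample = rng.choice(population, size=100, replace=False)
m = sample.mean()          # a statistic, used to estimate the parameter
s = sample.std(ddof=1)     # sample standard deviation
n = len(sample)

# 95% confidence interval for the population mean, via the t-distribution.
t_crit = stats.t.ppf(0.975, df=n - 1)
half_width = t_crit * s / np.sqrt(n)
print(f"estimate {m:.1f}, 95% CI [{m - half_width:.1f}, {m + half_width:.1f}]")
print("true parameter:", round(population.mean(), 1))
```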

WHAT IS DESCRIPTIVE STATISTICS?

Descriptive statistics includes statistical procedures that we use to describe the population we are studying. The data could be collected from either a sample or a population, but the results help us organize and describe data. Descriptive statistics can only be used to describe the group that is being studied. That is, the results cannot be generalized to any larger group. Descriptive statistics are useful and serviceable if you do not need to extend your results to any larger group. However, much social science research seeks “universal” truths about segments of the population, such as all parents, all women, all victims, etc.

Frequency distributions, measures of central tendency (mean, median, and mode), and graphs like pie charts and bar charts that describe the data are all examples of descriptive statistics. Descriptive statistics is the term given to the analysis of data that helps describe, show or summarize data in a meaningful way such that, for example, patterns might emerge from the data. Descriptive statistics do not, however, allow us to make conclusions beyond the data we have analysed or reach conclusions regarding any hypotheses we might have made. They are simply a way to describe our data.

WHAT IS INFERENTIAL STATISTICS?

Inferential statistics is concerned with making predictions or inferences about a population from observations and analyses of a sample. That is, we can take the results of an analysis using a sample and can generalize it to the larger population that the sample represents. In order to do this, however, it is imperative that the sample is representative of the group to which it is being generalized. To address this issue of generalization, we have tests of significance.

A chi-square test or t-test, for example, can tell us the probability that the results of our analysis on the sample are representative of the population that the sample represents. In other words, these tests of significance tell us the probability that the results of the analysis could have occurred by chance when there is no relationship at all between the variables we studied in the population. Examples of inferential statistics include linear regression analysis and ANOVA, to name a few.
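As a rough sketch of how such tests are run in practice (the counts and measurements below are invented for illustration; SciPy is one of several libraries that implement these tests):

```python
import numpy as np
from scipy import stats

# Chi-square test of independence on a (made-up) 2x2 table of counts.
observed = np.array([[30, 10],
                     [20, 40]])
chi2, p_chi2, dof, expected = stats.chi2_contingency(observed)

# Independent two-sample t-test on (made-up) measurements.
group_a = [5.1, 4.9, 5.6, 5.3, 4.8]
group_b = [4.2, 4.5, 4.0, 4.7, 4.3]
t_stat, p_t = stats.ttest_ind(group_a, group_b)

# Small p-values mean the observed result would be unlikely if there were
# no relationship (or no difference) in the population.
print(p_chi2, p_t)
```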

WHAT IS THE DIFFERENCE BETWEEN DESCRIPTIVE AND INFERENTIAL STATISTICS?

Both descriptive and inferential statistics look at a sample from some population. The difference between descriptive and inferential statistics is in what they do with that sample:

* Descriptive statistics aim to summarize the sample using statistical measures, such as the average, median, standard deviation, etc. For example, if we look at a basketball team’s game scores over a year, we can calculate the average score, variance, etc. and get a description (a statistical profile) for that team.

* Inferential statistics aim to draw conclusions about the population from the sample at hand. For example, they may try to infer the success rate of a drug in treating high temperature by taking a sample of patients, giving them the drug, and estimating the rate of effectiveness in the population using the rate of effectiveness in the sample.

* Descriptive statistics are limited insofar as they only allow you to make summaries about the people or objects that you have actually measured. You cannot use the data you have collected to generalize to other people or objects (i.e., using data from a sample to infer the properties/parameters of a population). For example, if you tested a drug to beat cancer and it worked in your patients, you could not claim that it would work in other cancer patients relying only on descriptive statistics (but inferential statistics would give you this opportunity).


Sports Statistician Essay


A little over a year ago, when I was a senior in high school, I took my first statistics class. At first the class seemed boring, because all we did at the beginning of the year was memorize key terms. Later in the year, however, I learned to love the subject. It is my favorite branch of mathematics, and that is why I am currently majoring in Math and Science Statistics at Kean University.

I wish to become a high school mathematics teacher, specifically teaching Statistics.

Since being a teacher only requires you to work about 8 hours a day, plus you get about 2 months off, my second career would have to be a sports statistician. A sports statistician is a person who basically collects data and analyzes it carefully. These people keep track of everything that goes on in a game. For example, some of the work done by statisticians includes baseball box scores.

The number of punches a fighter throws at an opponent and the number that have landed are also examples of a sports statistician’s work. When you need to know the progress of a particular game from your favorite team, a sports statistician is there to give you the most accurate data collected.

In a sporting event, attendance matters. Professional sports franchises hire statisticians to add up the attendance of their games. These statisticians also find the total number of people who attended games for the team throughout the year. They also estimate how many tickets will be sold in the following year. As you can see, franchises deploy statisticians so that they can figure out the strengths and weaknesses of their teams. Announcing a team’s attendance record is important to franchises as well, since it displays the strength, or lack thereof, of their fan base and stream of revenue.

The salary range for a statistician varies. According to the U.S. Bureau of Labor Statistics, the average salary for a statistician in the year 2010 was $36.57 per hour, which is about $76,070 per year. If you do the math, statisticians in the top 10th percentile of earners make an average of $56.35 per hour, or about $117,210 per year, while those in the bottom 10th percentile earn $18.48 per hour, or $38,430 per year. However, salaries may range from $25,000 to $95,000 plus, depending on a number of factors. Some of the variables include the type of organization the individual works for, its prestige, size, and location.
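These hourly and yearly figures line up under the standard full-time conversion of 2,080 paid hours per year (40 hours × 52 weeks); the conversion is my assumption, since the essay does not state it. A quick check:

```python
# Assumed conversion: 40 h/week * 52 weeks = 2080 paid hours per year.
HOURS_PER_YEAR = 40 * 52

for hourly in (36.57, 56.35, 18.48):
    print(f"${hourly}/h  ->  ${hourly * HOURS_PER_YEAR:,.0f}/yr")
# $36.57/h -> $76,066/yr   (essay: $76,070)
# $56.35/h -> $117,208/yr  (essay: $117,210)
# $18.48/h -> $38,438/yr   (essay: $38,430)
```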

Other variables are skills, experience level, and type of duties. Those who are responsible for just gathering the raw data for statistics are paid less than those who do the computer programming and analysis. In order to become a sports statistician, one must first attend a post-secondary institution. The majority of statisticians have a master’s degree in statistics or mathematics. It is essential for future statisticians to take computer science courses in school because they must use statistical software to record data. Continuing education courses are also important for sports statisticians because technology is always advancing.

School knowledge is not the only thing statisticians must be aware of. They also need background knowledge of sports. They need to have a strong knowledge and understanding of the sports they will cover. If they are serving as official scorers, they must know the rule books very well so they can accurately score plays. A way I can gain experience is by scoring games for my college teams. There are a couple of downfalls when it comes to being a sports statistician. For the most part, when you first start working, you work part-time. Statisticians are often required to put in irregular hours because they must attend sporting events to record data.

This usually includes nights and weekends. Work as a sports statistician may lead to eyestrain, back pain, or carpal tunnel syndrome.

Overall, my goal is to become a sports statistician in the future. I absolutely love mathematics, especially statistics. In my view, a career shouldn’t be just a job; it should be a passion, something you will enjoy doing for the rest of your life. Being a sports statistician will allow me to work with sports and professional teams. When I was a little girl, my dream was always to become a professional soccer player. Since the likelihood of that happening is one in a million, this career will be close enough.


Electrical Submersible Pump Survival Analysis Essay


Petroleum Engineer, Chevron Corp. & Masters Degree Candidate
Advisor: Dr. Jianhua Huang, with help from PhD Candidate Sophia Chen
Department of Statistics, Texas A&M, College Station
MARCH 2011

ABSTRACT

A common metric in Petroleum Engineering is “Mean Time Between Failures” or “Average Run Life”. It is used to characterize wells and artificial lift types, as a metric to compare production conditions, as well as a measure of the performance of a given surveillance & monitoring program.

Although survival curve analysis has been in existence for many years, the more rigorous analyses are relatively new in the area of Petroleum Engineering.

This paper describes the basic theory behind survival analysis and the application of those techniques to the particular problem of Electrical Submersible Pump (ESP) Run Life. In addition to the general application of these techniques to an ESP data set, this paper also attempts to answer: Is there a significant difference between the survival curves of an ESP system with and without emulsion present in the well?

Although survival curve analysis has been in existence for many years, the more rigorous analyses are relatively new in the area of Petroleum Engineering. As an example of the growth of these analysis techniques in the petroleum industry, Electrical Submersible Pump (ESP) survival analysis has been sparsely documented in technical journals for the last 20 years:

• First papers on the fitting of Weibull & Exponential curves to ESP run life data in 1990 (Upchurch) & 1993 (Patterson)
• Papers discussing the inclusion of censored data in 1996 (Brookbank) & 1999 (Sawaryn)
• Paper discussing the use of Cox Regression in 2005 (Bailey)

Unfortunately, the papers applying these techniques did little to transfer the knowledge to the practicing Petroleum Engineers. They shared the technical concepts and equations, but not the practical knowledge of how to apply them to real life problems or why these analyses improved upon the “take the average of the run life of failed wells” technique most commonly used.

THEORY OF SURVIVAL ANALYSIS

Survival analysis models the time it takes for events to occur and focuses on the distribution of the survival times.

It can be used in many fields of study where survival time can indicate anything from time to death (medical studies) to time to equipment failure (reliability metrics). This paper will present three methodologies for estimating survival distributions as well as a technique for modeling the relationship between the survival distribution and one or more predictor variables (both covariates and factors). Appendix A has a list of important definitions relevant to survival analysis.

KAPLAN MEIER (NON-PARAMETRIC)

Non-parametric survival analysis characterizes survival functions without assuming an underlying distribution.

The analysis is limited to reliability estimates for the failure times included in the data set (not prediction outside the range of data values) and comparison of survival curves one factor at a time (not multiple explanatory variables). A common non-parametric analysis is Kaplan Meier (KM). KM is characterized by a decreasing step function with jumps at the observed event times. The size of the jump depends on the number of events at that time t and the number of survivors prior to time t. The KM estimator provides the ability to estimate survival functions for right censored data:

S(t) = ∏_{t_i ≤ t} (1 − d_i / n_i)

where t_i is the time at which a “death” occurs and d_i is the number of deaths that occur at time t_i. When there is no censoring, n_i is the number of survivors just prior to time t_i. With censoring, n_i is the number of survivors minus the number of censored units. The resulting curve, as noted, is a decreasing step function with jumps at the times of “death” t_i. The MTBF is the area under the resulting curve; the P50 (median) time to failure is the time t at which S(t) = 0.5. Upper and lower confidence intervals can be calculated for the KM curve using statistical software. A back-of-the-envelope calculation for the confidence interval is the KM estimator +/- 2 standard deviations.
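As a concrete illustration of the estimator above, here is a minimal, hand-rolled sketch in Python/NumPy; the run-life numbers are invented, not drawn from the JIP data set:

```python
import numpy as np

def kaplan_meier(times, events):
    """Kaplan-Meier estimate of S(t) for right-censored data.

    times  : observed times (failure or censoring), one per unit
    events : 1 if the unit failed at that time, 0 if censored
    Returns the distinct failure times t_i and the estimate S(t_i).
    """
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=int)
    order = np.argsort(times)
    times, events = times[order], events[order]

    n = len(times)
    surv = 1.0
    t_out, s_out = [], []
    i = 0
    while i < n:
        j = i
        deaths = 0
        while j < n and times[j] == times[i]:
            deaths += events[j]          # censored units add 0
            j += 1
        at_risk = n - i                  # units with time >= times[i]
        if deaths > 0:
            surv *= 1.0 - deaths / at_risk   # S(t) = prod(1 - d_i/n_i)
            t_out.append(times[i])
            s_out.append(surv)
        i = j
    return np.array(t_out), np.array(s_out)

# Toy run-life data in days; event=0 marks an ESP still running (censored).
t, s = kaplan_meier([120, 300, 300, 800, 1100, 1500], [1, 1, 0, 1, 0, 1])
p50 = t[s <= 0.5][0] if np.any(s <= 0.5) else None  # first time S(t) reaches 0.5
print(dict(zip(t, np.round(s, 3))), "P50:", p50)
```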

Greenwood’s formula can be used to estimate the variance for nonparametric data (Cran.R-project):

Var(S(t)) ≈ S(t)² · Σ_{t_i ≤ t} d_i / [n_i (n_i − d_i)]

Figure 1: Example Kaplan Meier survival curve showing estimate, 95% confidence interval, and censored data points

When comparing two survival curves differing by a factor, a visual inspection of the null hypothesis Ho: the survival curves are equal, can be conducted by plotting the two survival curves and their confidence intervals. If the confidence intervals do not overlap, there is significant evidence that the survival curves are different (at alpha = 0.05).

COX PROPORTIONAL HAZARD (SEMI-PARAMETRIC)

Semi-parametric analysis enables more insight than the non-parametric method. It can estimate the survival curve from a set of data as well as account for right censoring, but it also conducts regression based on multiple factors/covariates and judges the contribution of a given factor/covariate to a survival curve. CPH is not as efficient as a parametric model (Weibull or Exponential), but the proportional hazards assumption is less restrictive than the parametric assumptions (Fox). Instead of assuming a distribution, the proportional hazards model assumes that the failure rate (hazard rate) of a unit is the product of:

• a baseline failure rate h0(t), which doesn’t need to be specified and is only a function of time, and
• a positive function which incorporates the effects of factors & covariates x_i1 – x_ik (independent of time):

h_i(t) = h0(t) · exp(β1·x_i1 + … + βk·x_ik)

This model is called semi-parametric because, while the baseline hazard can take any form, the covariates enter the model linearly. Given two observations i & i' with the same baseline failure rate function, but that differ in their x values (i.e., two wells with different operating parameters x_k), the hazard ratio for these two observations is independent of time:

h_i(t) / h_i'(t) = exp(β1·(x_i1 − x_i'1) + … + βk·(x_ik − x_i'k))

The above ratio is why the Cox model is a proportional-hazards model; even though the baseline failure rate h0(t) is unspecified, the β parameters in the Cox model can still be estimated by the method of partial likelihood. After fitting the Cox model, it is possible to get an estimate of the baseline failure rate and survival function (Fox). A result of the regression is an estimate for the various β coefficients and an R-square value describing the amount of variability explained in the hazard function by fitting this model. Relative contributions of factors/covariates can be interpreted as:

• β > 0: the covariate decreases the survival time as its value increases, by a factor of exp(β)
• β < 0: the covariate increases the survival time as its value increases, by a factor of exp(β)
• β = 0: the covariate has no effect on the survival time

WEIBULL (PARAMETRIC)

The two-parameter Weibull distribution has survival function S(t) = exp(−(t/λ)^k), with λ > 0 the scale and k > 0 the shape parameter; the median survival time is λ·(ln(2))^(1/k). The Weibull shape parameter, k, is also known as the Weibull slope. Values of k less than 1 indicate that the failure rate is decreasing with time (infant failures). Values of k equal to 1 indicate a failure rate that does not vary over time (random failures). Values of k greater than 1 indicate that the failure rate is increasing with time (mechanical wear out) (Weibull). A change in the scale parameter, λ, has the same effect on the distribution as a change of the X axis scale.

Increasing the value of the scale parameter, while holding the shape parameter constant, has the effect of stretching out the PDF and survival curve (Weibull).

Figure 2: Example Weibull curves with varying shape & scale parameters

The Weibull regression model is the same as the Cox regression model with the Weibull distribution as the baseline hazard. The proportional hazards assumption used by the CPH method, when applied to a survival curve with a Weibull function baseline hazard, only holds if two survival curves vary by a difference in the scale parameter (λ), not by a difference in the shape parameter (k).
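To make the shape/scale roles concrete, a minimal sketch with invented parameter values (not fitted to the ESP data):

```python
import numpy as np

# Hypothetical Weibull parameters, for illustration only:
lam, k = 1200.0, 1.3   # scale (days) and shape

def weibull_survival(t, lam, k):
    """Two-parameter Weibull survival function S(t) = exp(-(t/lam)**k)."""
    return np.exp(-(np.asarray(t, dtype=float) / lam) ** k)

median = lam * np.log(2.0) ** (1.0 / k)     # lam * (ln 2)^(1/k)
print(round(median, 1), weibull_survival(median, lam, k))  # S(median) = 0.5
```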

If goodness of fit to the Weibull distribution can be achieved, a confidence interval can be calculated for the curve, the median value and its confidence interval can be calculated, and a comparison of the differences in two survival curves can be conducted. Goodness of fit can be tested in R using an Anderson Darling calculation and verified with a Weibull probability plot. Poor fit in the tails of the Weibull distribution is a common occurrence for reliability data due to infant mortality and longer than expected wear out time.

STEPWISE COX & WEIBULL REGRESSION

Given a large number of explanatory variables and the larger number of potential interactions, not all of those variables may be necessary to develop a model that characterizes the survival curve. One way of determining a model is by using stepwise model selection through minimization of AIC (Akaike Information Criterion). This model selection technique allows variables to enter/exit the model based on their impact on the AIC calculated at that step. AIC is an improvement over maximizing the R-square in that it is a criterion that rewards goodness of fit while penalizing model complexity.
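In outline, stepwise selection by AIC looks like the following sketch; `fit_cox` here is a hypothetical placeholder for whatever routine fits a Cox model and reports its AIC, not a real library function:

```python
# Minimal sketch of backward stepwise selection by AIC. `fit_cox(data, vars)`
# is assumed to return a fitted-model object exposing an `aic` attribute.
def backward_stepwise(data, variables, fit_cox):
    current = list(variables)
    best = fit_cox(data, current)
    improved = True
    while improved:
        improved = False
        for v in list(current):
            candidate = [w for w in current if w != v]
            model = fit_cox(data, candidate)
            if model.aic < best.aic:   # lower AIC: better fit/complexity trade-off
                best, current = model, candidate
                improved = True
    return current, best
```

Forward and bidirectional variants work the same way, adding (or adding and dropping) one variable per step and keeping the change only if the AIC decreases.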

APPLICATION TO AN ESP DATA SET

As stated previously, these survival analysis techniques can be applied to many types of data in many industries, ranging from survival data for people in a medical study to survival data for equipment in a reliability study. These methodologies have many uses in the petroleum industry; from surface equipment system and component reliability used by facility and reliability engineers, to well and downhole system and component reliability used by petroleum and production engineers.

As an example, this paper illustrates the use of these techniques on the run life of Electrical Submersible Pumps (ESP). ESPs are a type of artificial lift for bringing produced liquids to the surface from within a wellbore. Appendix B includes a diagram of an ESP. For this paper, the run life will refer to the run life of an ESP system, not the individual components within the ESP system. While this paper focuses on ESP systems, these same techniques could be applied to other areas of Petroleum Engineering interest, including the run life of individual ESP components, other types of artificial lift, entire well systems, etc.

DATA DESCRIPTION

ESP-RIFTS JIP (Electrical Submersible Pump Reliability Information and Failure Tracking System Joint Industry Project) is a group of 14 international oilfield operators who have joined efforts to gain a better understanding of the circumstances that lead to a success or failure in a specific ESP application. The JIP includes access to a data set of 566 oil fields, 27,861 wells, 89,232 ESP installations, and 182 explanatory factors/covariates related to either the description of the ESP application or the description of the ESP failure.

For the analysis described in this paper, a subset of the data has been used, restricted to:

• observations related to Chevron operated fields
• observations with no conflicting information (as defined by the JIP’s data validation techniques)
• factors that were related to the description of the ESP application (excluded 27)
• factors not confounded with or multiples of other factors (excluded 30)
• factors with a large number (>90%) of non-missing data points (excluded 78)
• factors that were not free-form comment fields (excluded 27)

Appendix C has a list of the original 182 variables with comments on why they were removed from the analyzed data set; below is a table of the 20 remaining explanatory variables included in this analysis.

SUMMARY TABLE OF DATA INCLUDED IN THE CPH/REGRESSION ANALYSIS (OBSERVATIONS: 1588)

| Variable | Covariate/Factor & # of Levels | Description |
| RunLife | Response | Time between date put on production and date stopped or censored |
| Censor | Censor Flag (0, 1) | 1 if ESP failure, 0 if still running or stopped for a different reason |
| Country | Factor (7 levels) | Country & field in which the ESP is operated |
| Offshore | Factor (2 levels) | Indication of whether the ESP was an onshore or offshore installation |
| Oil | Covariate | Estimated average surface oil rate (m3/day) |
| Water | Covariate | Estimated average surface water rate (m3/day) |
| Gas | Covariate | Estimated average surface gas rate (1000 m3/day) |
| Scale | Factor (5 levels) | Qualitative level of scaling present in the well |
| CO2 | Covariate | % of CO2 present in the well |
| Emulsion | Factor (3 levels) | Qualitative level of emulsion present in the well |
| CtrlPanelType | Factor (2 levels) | Type of surface control panel used |
| NoPumpHouse | Covariate | Number of pump housings |
| PumpVendor | Factor (2 levels) | Pump vendor |
| NoPumpStage | Covariate | Number of pump stages |
| NoSealHouse | Covariate | Number of seal housings |
| NoMotorHouse | Covariate | Number of motor housings |
| MotorPowerRating | Covariate | Motor rated power at 60 Hz (HP) |
| NoIntakes | Covariate | Number of intakes |
| NoCableSys | Covariate | Number of cable systems |
| CableSize | Covariate | Size of cable |
| DHMonitorInstalled | Factor (2 levels) | Flag for installation of a downhole monitor |
| DeployMethod | Factor (2 levels) | Method of ESP deployment into the well |

FINDING THE P50 TIME TO FAILURE FOR A DATASET

Example 1: Using the entire data set, what is the P50 estimate for the runtime of a Chevron ESP? The answers differ considerably for the 4 calculation types:

| Methodology | Includes censored? | P50 estimate (days) | Assumption | Assumptions met? |
| Mean or Median | No | Mean: 563; Median: 439 | None | N/A |
| Kaplan Meier Median | Yes | 1044 | None | N/A |
| CPH Median | Yes | 1043 | None (as no comparison of levels/covariates, essentially same results as KM) | N/A |
| Weibull Median | Yes | 1067 | Anderson Darling GOF for Weibull distribution | No (rejected the null hypothesis of good fit, due to poor fit in the tails) |

In this example, the biggest impact on the difference between the methods is the inclusion of censored data. A large number of the ESPs in this data set have been running for >3000 days without a failure and were excluded in the often used calculation of the average run life of all failed ESPs. Given that the Weibull distribution did not pass the Anderson Darling goodness of fit test, the most appropriate calculation would have been the KM or CPH.

Appendix E has the output from the various methodologies. The interpretation of these results is that the P50 estimate of run life for an ESP installation in Chevron is ~1044 days. Additionally, output from the KM analysis sets the 95% confidence interval at 952 to 1113 days.

Figure 3: Comparison of estimation methods for full data survival curve. Note the deviation of the Weibull in the tails of the data.

COMPARING TWO SURVIVAL CURVES DIFFERING BY A FACTOR

Example 2: Using the 2 level factor emulsion, does the presence of emulsion in the well make a significant difference in the P50 run life of an ESP system?

| Methodology | Includes censored? | Emulsion P50 (days) | No emulsion P50 (days) | Significant difference? | Interpretation | Assumptions met? |
| Mean or Median | No | Mean: 600; Median: 458 | Mean: 536; Median: 424 | Don't know | Well performance is about the same | N/A |
| Kaplan Meier Median | Yes | 606 | 1508 | Yes (visual inspection of CI) | Wells without emulsion perform much better | N/A |
| CPH Median | Yes | 533 | 1408 | Yes; with a likelihood ratio test and a p-value of 0, reject that the β's are the same | Wells without emulsion survive longer. exp(β) indicates 2.5 times increased survival time for no emulsion | No (reject null hypothesis of proportional hazards with a p-value of 0.01) |
| Weibull Median | Yes | 531 | 1463 | Yes; with a z test statistic and a p-value of 0, reject that the scale values are the same | Wells without emulsion survive longer. Scale parameter value indicates 2.75 times increased survival time for no emulsion | No (reject null hypothesis of goodness of fit due to poor fit in the tails) |

The more complex the methodology used, the more information is available to interpret the results. Again, the addition of censored data resulted in a very different interpretation of the data than just using the mean/median value of all failed ESPs; not just in the order of magnitude of the results, but also in the determination of which condition resulted in a longer run life. The results of both the CPH & Weibull methodologies are suspect due to their failure to meet the prerequisite assumptions. Looking at the plots, it is apparent that the fit is poor in the tails.

Appendix F has the output from the various methodologies. The interpretation of these results is that wells without emulsion have more than a 2x increase in their P50 run life compared to wells with emulsion. It should be noted that, given the other factors that differ in the operation of these ESPs, this difference may not be fully attributable only to the difference in emulsion, but this interpretation should lead to further investigation.

Figure 4: KM estimated survival curves for ESPs with and without emulsion, with confidence interval

Figure 5: Comparison of estimation methods (KM, CPH, Weibull) for ESPs with and without emulsion

CHOOSING THE VARIABLES THAT CHARACTERIZE A SURVIVAL CURVE

Example 3: Of the variables collected by the JIP, which most describe the survival function? Do the variables collected in the dataset capture the variation in the survival function? As stated previously, both Weibull & Cox regression fit a model using explanatory variables. The introduction of stepwise variable selection to that regression allows the preferential fitting of the model by minimizing the AIC. As Weibull regression is a special case of Cox regression with a Weibull baseline hazard function, and as Cox regression has less restrictive assumptions than parametric regression, this example will focus solely on Cox regression using stepwise model selection.


Electoral College USA Politics 30 Marker Essay


Evaluate the view that despite criticisms, the Electoral College is by far the best method of electing the US President. (30)

The Electoral College is an institution established by the Founding Fathers to elect the President indirectly. The Electoral College never meets; instead, the presidential Electors – whose number equals the number of representatives and senators the state has in the United States Congress – meet to cast ballots for the President and Vice President.

Ever since the Electoral College was established, the electioneering system has received major criticism for its over-representation of small states.

One could suggest more densely populated states such as California are at a disadvantage when one compares the voter population to Electoral College numbers. Compared to a small state such as Wyoming, which receives an Electoral College vote for every 165,000 people, California’s ratio is markedly different, receiving an Electoral College vote for every 617,000 people.

Therefore, if California were to receive votes on the same basis as Wyoming, it would have not 55 Electoral College votes but 205.

So it may appear California is in some way representationally handicapped by the Electoral College system.

Therefore, how good a method is the Electoral College if it goes against basic democratic principles by making the vote of one citizen worth more than the vote of another, depending on the population of the state in which they reside? Moreover, another major criticism of the Electoral College is the winner-takes-all system.

This simply means a candidate can win the popular vote but end up losing the election. This again challenges the democratic stance of American politics, as a candidate can be favoured amongst the majority of the population, yet lose the election because of the way in which states are represented within the Electoral College system. This undemocratic mishap has occurred in the past, in the elections of 1876, 1888 and 2000, the last of which was arguably the most controversial.

Republican George W. Bush received 50,456,002 popular votes and won 271 electoral votes. His Democratic opponent, Al Gore, won the popular vote with 50,999,897 votes but secured only 266 electoral votes, and therefore lost the election. Voting events of this nature have led many voters and political commentators to believe that the Electoral College does not represent the electorate proportionately and efficiently, and is therefore an ineffective method of electing the U.S. President.

Even though the Electoral College receives numerous criticisms for the faults that it possesses, it is favoured amongst smaller states, which feel that the system provides them with a more influential voice within the voting system. Sparsely populated states such as Wyoming and Delaware, which both tally below 7 Electoral College votes, feel that if the Electoral College were to be abolished, the votes of their inhabitants would become almost worthless, swept aside by the size of such states as California and Texas, which are predominantly more populated.

Another benefit of the Electoral College system is that it forces candidates to campaign on a state-by-state basis rather than focus on the more populated states. By doing so, it makes sparsely populated states feel more involved and active within the political process, and it could therefore be argued to be a good method of electing the President, as it promotes a democratic stance within small states.

Yet to suggest that the Electoral College system is the best method of electing a President would be completely open to opinion. Small states within America would argue that it is an effective system, as it provides them with influence within the political system, whereas larger states would argue it is an improper form of electing the President, as the viewpoints of voters within their states are not as democratically voiced in comparison to small states.

Personally, I believe the best method of electing the President would be to refer to the popular vote, as it is a clearer and more representative form of voting in comparison to the Electoral College system.


Common Levels of Data Measurement Essay


Four common levels of data measurement follow.

• Nominal Level. The lowest level of data measurement is the nominal level.

Numbers representing nominal level data (the word level often is omitted) can be used only to classify or categorize. Employee identification numbers are an example of nominal data. The numbers are used only to differentiate employees and not to make a value statement about them. Many demographic questions in surveys result in data that are nominal because the questions are used for classification only.

Some other types of variables that often produce nominal-level data are sex, religion, ethnicity, geographic location, and place of birth. Social Security numbers, telephone numbers, employee ID numbers, and ZIP code numbers are further examples of nominal data. Statistical techniques that are appropriate for analyzing nominal data are limited. However, some of the more widely used statistics, such as the chi-square statistic, can be applied to nominal data, often producing useful information.

• Ordinal Level. Ordinal-level data measurement is higher than the nominal level. In addition to the nominal level capabilities, ordinal-level measurement can be used to rank or order objects.

• Interval Level. Interval-level data measurement is the next to the highest level of data, in which the distances between consecutive numbers have meaning and the data are always numerical. The distances represented by the differences between consecutive numbers are equal; that is, interval data have equal intervals.

An example of interval measurement is Fahrenheit temperature. With Fahrenheit temperature numbers, the temperatures can be ranked, and the amounts of heat between consecutive readings, such as 200, 210, and 220, are the same. In addition, with interval-level data, the zero point is a matter of convention or convenience and not a natural or fixed zero point. Zero is just another point on the scale and does not mean the absence of the phenomenon. For example, zero degrees Fahrenheit is not the lowest possible temperature.

Some other examples of interval level data are the percentage change in employment, the percentage return on a stock, and the dollar change in stock price.

• Ratio Level. Ratio-level data measurement is the highest level of data measurement. Ratio data have the same properties as interval data, but ratio data have an absolute zero, and the ratio of two numbers is meaningful. The notion of absolute zero means that zero is fixed, and the zero value in the data represents the absence of the characteristic being studied.

The value of zero cannot be arbitrarily assigned because it represents a fixed point. This definition enables the statistician to create ratios with the data. Examples of ratio data are height, weight, time, volume, and Kelvin temperature. With ratio data, a researcher can state that 180 pounds of weight is twice as much as 90 pounds or, in other words, make a ratio of 180:90. Many of the data gathered by machines in industry are ratio data.
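A short illustration of why ratios are meaningful only at the ratio level (a minimal sketch; the temperatures are made-up values): dividing two Kelvin readings is meaningful because 0 K is an absolute zero, while the same division on the Celsius scale, whose zero is a convention, produces a misleading number.

```python
# Interval scale (Celsius): zero is arbitrary, so ratios mislead.
c1, c2 = 10.0, 20.0
print(c2 / c1)     # 2.0 -- but 20 C is NOT "twice as hot" as 10 C

# Ratio scale (Kelvin): zero is absolute, so ratios are meaningful.
k1, k2 = c1 + 273.15, c2 + 273.15
print(k2 / k1)     # ~1.035 -- the physically meaningful ratio
```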


Develop some hypotheses to explore, as well as listening to what the client says about himself


Hypotheses

Hector is a 35-year-old Hispanic male. Hector was seriously injured   in an auto accident 5 months ago. He spent 4 months in the hospital.   One leg and hip suffered permanent damage. He will probably always   need a cane and walk with a limp. He also has limited mobility in his   right arm and shoulder and cannot lift anything heavy.

Hector grew up in a family with a construction business. He has been   working in construction since high school. Social work at the rehab   hospital referred Hector to counseling because he is not adjusting   well to his injuries. He is very depressed and angry because he cannot   continue to work in his chosen occupation. Hector says, “What   use am I now? I am the man in the family. I should be strong. I need   to support the family. I don’t want my wife to work. I   don’t know what I am going to do to make a living.”

Hector still experiences severe pain in his shoulder and his leg. He   spends a lot of time watching TV and drinking. At this point, he is   not looking for work. His family has reached out to him but he avoids   interaction with his siblings and his parents because they remind him   of what he no longer can do. He feels shame when he is around family   members because of his disability. His relationship with his wife is   tense because he is often angry and withdrawn.

You are meeting with Hector for the first time and want to get the story.

As we meet with a new client, we develop some hypotheses to explore, as well as listening to what the client says about himself.

Write a 750-1,000-word paper addressing the following:

  1. What are some of the primary issues that Hector is   addressing?
  2. What are some of Hector’s feelings?
  3. What are Hector’s behaviors and how are these impacting     his adjustment?
  4. How might Hector’s cultural heritage     be impacting his adjustment to his injury?
  5. What are some     beliefs that may be impacting Hector’s adjustment?
  6. How might Hector’s family background be impacting his   adjustment?
  7. What might be some issues that will need to be     addressed in counseling?
  8. What are the stages of change the     counselor will help Hector through? Provide a brief definition     of each stage.
  9. What steps can a counselor take to help     Hector explore his problems?
  10. Consider the following     counselor skills: eye contact, closed and open-ended questions,     paraphrasing and summarizing, reflection, body language, tone,     listening, empathy, genuineness, unconditional positive regard,     interpretation, tracking, reflecting emotion, active presence,     structuring sessions, encouragement, questioning and clarifying,     counselor self-awareness, tolerating intensity, validating,     challenging, multicultural awareness/acceptance, probing and     questioning, posture. Pick one skill and discuss how a counselor     should use this in their sessions with Hector.

Include at least three scholarly references in your paper. At least two   resources should identify issues that need to be addressed when you   are working with someone who is being rehabilitated from a serious injury.

Prepare this assignment according to the guidelines found in the APA   Style Guide, located in the Student Success Center. An abstract is not required.


Using SPSS and Microsoft Word, complete problems 1 through 10 on pages 367–370 in the Ross textbook. Show all work. Submit both your SPSS and Word files for grading.


SPSS and Microsoft Word

As a current or future health care administration leader, how will you decide which statistical test is most appropriate for the goal of monitoring, tracking, or overseeing operations in your health services organization?

Each statistical test is dependent on a given set of research parameters and expectations. As you have examined throughout this course, understanding why and when to use each statistical test is necessary in conducting the tests for process control and comparison. Thus, chi-square tests are useful for table data, ANOVA for a single quantitative dependent variable and categorical independent variables, and ANOM for analysis of mean differences. Regression modeling is useful for determining the influence of independent variables on a quantitative dependent variable.
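As a hedged sketch of how those pairings look in code (SPSS is the assigned tool; this Python/SciPy version, with invented numbers, only shows which data shapes call for which test, and ANOM is omitted because SciPy has no off-the-shelf routine for it):

```python
import numpy as np
from scipy import stats

# Chi-square: table (count) data, e.g. readmission counts by clinic.
table = np.array([[25, 75],
                  [40, 60]])
chi2, p_chi2, dof, expected = stats.chi2_contingency(table)

# One-way ANOVA: one quantitative outcome across categorical groups,
# e.g. patient wait times (minutes) at three facilities.
site1, site2, site3 = [32, 28, 35, 30], [40, 44, 39, 41], [29, 31, 27, 33]
f_stat, p_anova = stats.f_oneway(site1, site2, site3)

# Regression: influence of an independent variable on a quantitative
# dependent variable, e.g. staffing level vs. daily throughput.
staffing = np.array([5, 6, 7, 8, 9], dtype=float)
throughput = np.array([50, 57, 66, 71, 80], dtype=float)
model = stats.linregress(staffing, throughput)
print(p_chi2, p_anova, model.slope, model.pvalue)
```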

For this Assignment, review the resources for this week regarding chi-square, ANOVA, ANOM, and regression. Pay particular attention to the examples shown in the textbook.

The Assignment: (4– pages)

  • Using SPSS and Microsoft Word, complete problems 1 through 10 on pages 367–370 in the Ross textbook. Show all work. Submit both your SPSS and Word files for grading.


why and when to use each statistical test is necessary in conducting the tests for process control and comparison


ANOVA

As a current or future health care administration leader, how will you decide which statistical test is most appropriate for the goal of monitoring, tracking, or overseeing operations in your health services organization?

Each statistical test is dependent on a given set of research parameters and expectations. As you have examined throughout this course, understanding why and when to use each statistical test is necessary in conducting the tests for process control and comparison. Thus, chi-square tests are useful for table data, ANOVA for a single quantitative dependent variable and categorical independent variables, and ANOM for analysis of mean differences. Regression modeling is useful for determining the influence of independent variables on a quantitative dependent variable.

For this Assignment, review the resources for this week regarding chi-square, ANOVA, ANOM, and regression. Pay particular attention to the examples shown in the textbook.

The Assignment: (4– pages)

  • Using SPSS and Microsoft Word, complete problems 1 through 10 on pages 367–370 in the Ross textbook. Show all work. Submit both your SPSS and Word files for grading.

Book to use

https://ereader.chegg.com/#/books/9781118603642/cfi/362!/4/4@0.00:0.00

Will send login details through the chat. Thank you.


The Scientific Method: Discuss the significance of the scientific approach to the development and advancement of human knowledge.


The Scientific Method

After considering the scientific method explained in the textbook, write an essay about how it compares to the way nonscientists approach problems. Identify some problems that are solvable scientifically and some that are not. Using one or two small problems, describe the process you would go through in solving that problem using the scientific method. Discuss the significance of the scientific approach to the development and advancement of human knowledge. Your essay should be about 300 words.

Biology consists of a great deal of knowledge. Much of that knowledge takes the form of facts that we refer to as theories. Or perhaps this is better understood by saying that biologists treat theories as though they were facts. But they are a special kind of fact. They are not a fact the way your social security number is a fact. A theory is a fact that has been derived using the scientific method.

The scientific method always starts with an observation. And notice carefully that we use the singular word, observation, and not the plural ‘observations’, even if a thousand events were observed. The observation leads to a question. Questions come in many shapes and forms, but the scientific method needs to pose only very specific questions. This is because the question must be able to be worded as a hypothesis. What is a hypothesis? A hypothesis is a specific statement in which a cause and effect scenario is central. For an example, follow along with the scenarios presented in the assigned textbook readings. You will see that a hypothesis can never be an open-ended question. It must be specific. For example, this is a hypothesis: If I put a cover over a flame, it will go out. This is not a hypothesis: Why does the flame go out when I put a cover over it? After you have created a hypothesis, you design experiments to see if you can support your hypothesis. Keep in mind that in the biological sciences, while you can support a hypothesis, you can never prove one. This is one of the most misunderstood concepts in science. You will never account for every possible condition for a given hypothesis; therefore, you can never prove it beyond any shadow of a doubt.
