gridview: set the selected item checked/highlighted?

It's still an open question.


Theme.Light doesn't work on newer Android

“If you’re developing for API level 11 and higher, you should instead use Theme.Holo or Theme.DeviceDefault.”

We want to create a transparent activity in our app, but we also want the app to use a Holo theme when run on Honeycomb (API 11) and above. However, Theme.Translucent.NoTitleBar does NOT use the nice new Holo look.

So your app would use Holo.Light, but your transparent activity would use the older default theme. This looks very unprofessional.

SetApplicationAttribute(android:theme, "@android:style/Theme.Holo.Light")

A transparent effect with the Holo theme can be obtained by setting your application theme to

@android:style/Theme.Holo.Light.Panel

But again, this has a problem: run it on a device with an Android version below Honeycomb (e.g. Gingerbread) and you get an error, as the theme simply doesn't exist there.

solution

\res\values\theme.xml
\res\values-v11\theme.xml

Anything added to theme.xml in the values folder is used by default. Anything added to theme.xml in values-v11 is used when Android v11 (Honeycomb) or above is running.

\res\values\theme.xml:

<?xml version="1.0" encoding="utf-8"?>

<resources>
    <style
        name="MyThemeTransparent" parent="@android:style/Theme.Translucent.NoTitleBar">
    </style>
</resources>

\res\values-v11\theme.xml:

<?xml version="1.0" encoding="utf-8"?>

<resources>
    <style 
        name="MyThemeTransparent" parent="@android:style/Theme.Holo.Light.Panel">
    </style>
</resources>
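With both files in place, the transparent activity just references the theme by name and the system resolves it against values/ or values-v11/ automatically. A minimal sketch of the manifest entry (the activity name here is a placeholder):

```xml
<activity
    android:name=".TransparentActivity"
    android:theme="@style/MyThemeTransparent" />
```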

hsv wheel color picker

via http://chiralcode.com/color-picker-for-android/

Single color picker

The color picker is based on the HSV color model. Using the color wheel in the middle, the user can adjust hue and saturation. The arc on the right side lets the user change the value of the selected color; the arc on the left side shows the currently selected color. This is how it looks:


Multi color picker

A more advanced version allows picking several colors at once. It is not easy to compose a palette of eye-catching colors; changing only the hue while keeping saturation and value at the same level gives nice results and is easy to achieve.
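That hue-rotation trick takes only a few lines; here is a minimal sketch using Python's standard colorsys module (the function name and the default saturation/value are my own choices):

```python
import colorsys

def hue_palette(n=5, s=0.8, v=0.9):
    """Compose n colors by rotating hue while keeping saturation and value fixed."""
    palette = []
    for i in range(n):
        # hues spaced evenly around the wheel, same S and V for every entry
        r, g, b = colorsys.hsv_to_rgb(i / n, s, v)
        palette.append('#%02x%02x%02x' % (round(r * 255), round(g * 255), round(b * 255)))
    return palette
```

Five evenly spaced hues at a shared saturation and value read as a coherent palette, which is exactly the effect described above.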


Project is available at GitHub: https://github.com/chiralcode/Android-Color-Picker/

 

UberColorPicker Demo


Superdry Color Picker


https://github.com/superdry/

Color Spectrum

Digital Image Basics: http://www.normankoren.com/light_color.html
RAL colour standard: http://en.wikipedia.org/wiki/RAL_colour_standard

 

The color spectrum is a 1D representation of the 3D color information in an image. The spectrum represents all the color information associated with that image or a region of the image in the HSL space.

HSL Color Space

If the input image is in RGB format, the image is first converted to HSL format and the color spectrum is computed from the HSL space. Using HSL images directly—those acquired with an image acquisition device with an onboard RGB to HSL conversion for color matching—improves the operation speed.

Colors represented in the HSL model space are easy for humans to quantify. The luminance—or intensity—component in the HSL space is separated from the color information. This feature leads to a more robust color representation independent of light intensity variation. However, the chromaticity—or hue and saturation—plane cannot be used to represent the black and white colors that often make up the background in many machine vision applications. Refer to the color pattern matching section for more information about color spaces.

Each element in the color spectrum array corresponds to a bin of colors in the HSL space. The last two elements of the array represent black and white colors, respectively.

how the HSL color space is divided into bins

The hue space is divided into a number of equal sectors, and each sector is further divided into two parts: one part representing high saturation values and another part representing low saturation values. Each of these parts corresponds to a color bin—an element in the color spectrum array.

The following figure illustrates the correspondence between the color spectrum elements and the bins in the color space.

A color spectrum with a larger number of bins, or elements, represents the color information in an image with more detail, such as a higher color resolution, than a spectrum with fewer bins.

The value of each element in the color spectrum indicates the percentage of image pixels in each color bin. When the number of bins is set according to the color sensitivity parameter, the machine vision software scans the image, counts the number of pixels that fall into each bin, and stores the ratio of the count and total number of pixels in the image in the appropriate element within the color spectrum array.
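A sketch of this binning scheme in Python (the number of hue sectors, the saturation split, and the lightness cut-offs for the black and white bins are assumptions for illustration; real machine vision packages derive the bin count from a color sensitivity setting):

```python
import colorsys

def color_spectrum(pixels, hue_sectors=7, sat_split=0.5, black_l=0.1, white_l=0.9):
    """Return the fraction of pixels falling in each HSL color bin.

    Layout: two bins (high/low saturation) per hue sector,
    then one bin for black and one for white, as described above.
    """
    bins = [0] * (hue_sectors * 2 + 2)
    for r, g, b in pixels:
        h, l, s = colorsys.rgb_to_hls(r / 255, g / 255, b / 255)
        if l < black_l:
            bins[-2] += 1                      # black bin
        elif l > white_l:
            bins[-1] += 1                      # white bin
        else:
            sector = min(int(h * hue_sectors), hue_sectors - 1)
            part = 0 if s >= sat_split else 1  # high vs. low saturation half
            bins[sector * 2 + part] += 1
    return [count / len(pixels) for count in bins]
```

The most dominant color is then simply the bin with the largest value, e.g. `max(range(len(spec)), key=spec.__getitem__)`.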

The color spectrum contains useful information about the color distribution in the image. You can analyze the color spectrum to get information such as the most dominant color in the image, which is the element with the highest value in the color spectrum. You also can use the array of the color spectrum to directly analyze the color distribution and for color matching applications.

If you lighten or darken color images, you need to understand how color is represented. Unfortunately, there are several models for representing color. The first two should be familiar; the latter two may be new.

  • It is not practical to use RGB or CMY(K) to adjust brightness or color saturation because each of the three color channels would have to be changed, and changing them by the same amount to adjust brightness would usually shift the color (hue).
  • HSV and HSL are practical for editing because the software only needs to change V, L, or S.

Image editing software typically transforms RGB data into one of these representations, performs the adjustment, then transforms the data back to RGB. You need to know which color model is used because the effects on saturation are very different.

HSV color. V = 0 is pure black regardless of H or S, while V = 1 with S = 0 is pure white; a pure, fully saturated hue has V = 1 and S = 1. The HSV color model can be depicted as a single cone, widest at the top (V = 1) and coming to a point at the bottom (V = 0; pure black).

HSL color. Maximum color saturation takes place at L = 0.5 (50%). L = 0 is pure black and L = 1 (100%) is pure white, regardless of H or S. The HSL color model can be depicted as a double cone, widest at the middle (L = 0.5), coming to points at the top (L = 1; pure white) and bottom (L = 0; pure black).

HSV and HSL were developed to represent colors in systems with limited dynamic range (pixel levels 0-255 for 24-bit color). The limitation forces a compromise.

  • HSV represents saturation much better than brightness: V = 1 can be a pure primary color or pure white; hence “Value” is a poor representation of brightness.
  • HSL represents brightness much better than saturation: L = 1 is always pure white, but when L > 0.5, colors with S = 1 contain white, hence aren’t completely saturated.
  • In both models, hue H is unchanged when L, V, or S are adjusted.

 

  • Darkening in HSV reduces saturation. Darkening in HSL increases saturation when L > 0.5.
  • Lightening in HSV increases saturation. Lightening in HSL reduces saturation when L > 0.5.
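These statements can be checked directly with Python's colorsys module; a small sketch for the darkening case (the starting color is arbitrary):

```python
import colorsys

rgb = (1.0, 0.4, 0.4)  # a light red: in HSL terms, L = 0.7 and S = 1.0 (contains white)

# Darken via HSV: halve V, keep H and S fixed.
h, s, v = colorsys.rgb_to_hsv(*rgb)
via_hsv = colorsys.hsv_to_rgb(h, s, v / 2)   # -> roughly (0.5, 0.2, 0.2)

# Darken via HSL: halve L, keep H and S fixed.
h, l, s = colorsys.rgb_to_hls(*rgb)
via_hsl = colorsys.hls_to_rgb(h, l / 2, s)   # -> roughly (0.7, 0.0, 0.0), a pure red

# Compare the results' saturation, measured in HSV terms:
s_after_hsv = colorsys.rgb_to_hsv(*via_hsv)[1]  # ~0.6; its HSL saturation dropped
s_after_hsl = colorsys.rgb_to_hsv(*via_hsl)[1]  # ~1.0: darkening in HSL (from L > 0.5)
                                                # pushed the color to full saturation
```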

[Figures: HSV (best representation of saturation) and HSL (best representation of lightness); V, L, and H illustrated for S = 1 (maximum saturation); V, L, and S illustrated for H = 0.333 (120º, green).]

The eye can barely distinguish about 200 different gray levels.

RGB is the additive primary color model.

 

What’s the Difference Between a Z Score and a T Score?

Both indicate how many standard deviations an observation is above or below the mean.

Most commonly used in a z-test, the z-score plays the same role for a population that the t-score plays for a sample.

When you know the population standard deviation and the population mean, it is better to use a z-test. When you do not have this information and instead have sample data, it is prudent to go for a t-test.

In a z-test, you compare a sample to a population. A t-test, on the other hand, can be performed for a single sample, for two distinct samples that are unrelated, or for two or more samples that are matched.

When the sample is large (n greater than 30), the z-score is normally calculated, but the t-score is preferred when the sample size is less than 30. This is because a small sample does not give a good estimate of the population standard deviation, which is why a t-score is better.
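The two statistics share the same shape and differ only in where the standard deviation comes from; a minimal sketch (the function names are my own):

```python
import math

def z_statistic(sample_mean, pop_mean, pop_sd, n):
    """z-test: the population standard deviation is known."""
    return (sample_mean - pop_mean) / (pop_sd / math.sqrt(n))

def t_statistic(sample_mean, pop_mean, sample_sd, n):
    """t-test: the population SD is unknown, estimated from the sample (df = n - 1)."""
    return (sample_mean - pop_mean) / (sample_sd / math.sqrt(n))

# e.g. an IQ sample: mean 105 from n = 36, known population mean 100 and SD 15
z = z_statistic(105, 100, 15, 36)   # (105 - 100) / (15 / 6) = 2.0
```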

Z Score vs T Score

• T scores and z scores both measure deviation from the average.

• For T scores, the average is taken as 50 with an SD of 10, so a person scoring more or less than 50 is above or below average.

• The average z score is 0; to be considered above average, a person has to get a z score greater than 0.

Read more: http://www.differencebetween.com/difference-between-z-score-and-vs-t-score/

czech visa

Checklist:
1. Passport (plus a photocopy) and UK residence card (original returned; photocopy needed)
2. Three months of bank statements (originals checked; photocopies needed)
3. Hotel confirmation
4. Flight booking confirmation
5. Travel insurance (print out the full policy terms). The £5 insurance I bought from jet2.com was accepted (much cheaper than the Post Office's).
6. Invitation letter from the conference organizer
7. Receipt for the conference registration fee (optional)
8. Employer's letter (or proof of student enrollment)
9. Application form (plus a photocopy and one 2-inch photo)

That's everything. They said it takes about one week; officially 10-14 days.

 

Prague trip notes

23/6: called 09065 540 727 to make an appointment (£1.02 per minute on a BT line).

The earliest appointment was 9/7 at 9am.

14/7: handed in the application.

12/8: flight.

 

main effect & interaction

via http://www.psychstat.missouristate.edu/multibook/mlt09.htm

The ANOVA Summary Table

The results of the analysis are presented in the ANOVA summary table, presented below for the example data.

The items of primary interest in this table are the effects listed under the “Source” column and the values under the “Sig.” column. As in the previous hypothesis test, if the value of “Sig.” is less than the value of α set by the experimenter, then that effect is significant. If α = .05, then the Ability main effect and the Ability BY Method interaction would be significant in this table.

Main effects are differences in means over levels of one factor collapsed over levels of the other factor. This is actually much easier than it sounds. For example, the main effect of Method is simply the difference between the means of final exam score for the two levels of Method, ignoring or collapsing over Ability. As seen in the second method of presenting a table of means, the main effect of Method is whether the two marginal means associated with the Method factor are different. In the example case these means were 30.33 and 30.56, and the difference between them was not statistically significant.

As can be seen from the summary table, the main effect of Ability is significant. This effect refers to the differences between the three marginal means associated with Ability. In this case the values for these means were 27.33, 33.83, and 30.17 and the differences between them may be attributed to a real effect.

Simple Main Effects

A simple main effect is a main effect of one factor at a given level of a second factor. In the example data it would be possible to talk about the simple main effect of Ability at Method equal blue-book. That effect would be the difference between the three cell means at level a1 (26.67, 31.00, and 33.33). One could also talk about the simple main effect of Method at Ability equal lots (33.33 and 27.00). Simple main effects are not directly tested in this analysis. They are, however, necessary to understand an interaction.

Interaction Effects

An interaction effect is a change in the simple main effect of one variable over levels of the second. An A X B or A BY B interaction is a change in the simple main effect of B over levels of A or the change in the simple main effect of A over levels of B. In either case the cell means cannot be modeled simply by knowing the size of the main effects. An additional set of parameters must be used to explain the differences between the cell means. These parameters are collectively called an interaction.

The change in the simple main effect of one variable over levels of the other is most easily seen in the graph of the interaction. If the lines describing the simple main effects are not parallel, then a possibility of an interaction exists. As can be seen from the graph of the example data, the possibility of a significant interaction exists because the lines are not parallel. The presence of an interaction was confirmed by the significant interaction in the summary table. The following graph overlays the main effect of Ability on the graph of the interaction.
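The arithmetic behind the main effects and the interaction can be reproduced from the cell means; a sketch using the numbers quoted above (the second Method row is inferred from the reported marginal means, and the factor level labels are assumptions):

```python
# Rows: Method (blue-book, second level); columns: Ability (little, some, lots).
cells = [
    [26.67, 31.00, 33.33],   # Method = blue-book (given in the text)
    [28.00, 36.67, 27.00],   # second Method level, inferred from the marginal means
]

# Main effects: marginal means, collapsed over the other factor.
method_means = [sum(row) / 3 for row in cells]         # ~30.33 and ~30.56
ability_means = [sum(col) / 2 for col in zip(*cells)]  # ~27.33, 33.83, 30.17

# Simple main effects of Method at each Ability level; if these differences
# are unequal, the lines in the interaction plot are not parallel.
method_effect_by_ability = [col[1] - col[0] for col in zip(*cells)]
```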

statistics

• Showing a relationship: observational studies, surveys.

• Showing causation: controlled experiments. (A survey is used to analyze the construct.)

• The median is robust; the mode is a measure of the center.

• Cut the tails: lower 25% and upper 25%. Boxplots show the IQR; max - min = range, i.e. how spread out the data are.

• Use a small bin size to show as much detail as possible.

• Probability density function: we're never 100% sure.

 

central limit theorem – the distribution of sample means is approximately normal.
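A quick simulation makes the theorem concrete: draw many sample means from a clearly non-normal (uniform) population and watch their distribution center on the population mean with spread sigma / sqrt(n). The sample size and repetition count here are arbitrary choices:

```python
import random

random.seed(1)  # reproducible sketch

def sample_mean(n):
    """Mean of n draws from a uniform(0, 1) population (mean 0.5, SD ~0.2887)."""
    return sum(random.random() for _ in range(n)) / n

means = [sample_mean(30) for _ in range(2000)]
grand_mean = sum(means) / len(means)   # ~0.5, the population mean
sd_of_means = (sum((m - grand_mean) ** 2 for m in means) / len(means)) ** 0.5
# sd_of_means ~ 0.2887 / sqrt(30) ~ 0.053: the standard error of the mean
```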

 

95% confidence interval for the mean

A confidence interval gives an estimated range of values which is likely to include an unknown population parameter, the estimated range being calculated from a given set of sample data.

If independent samples are taken repeatedly from the same population, and a confidence interval calculated for each sample, then a certain percentage (confidence level) of the intervals will include the unknown population parameter. Confidence intervals are usually calculated so that this percentage is 95%, but we can produce 90%, 99%, 99.9% (or whatever) confidence intervals for the unknown parameter.

The width of the confidence interval gives us some idea about how uncertain we are about the unknown parameter (see precision). A very wide interval may indicate that more data should be collected before anything very definite can be said about the parameter.

Confidence intervals are more informative than the simple results of hypothesis tests (where we decide “reject H0” or “don’t reject H0”) since they provide a range of plausible values for the unknown parameter.

A confidence interval for a mean specifies a range of values within which the unknown population parameter, in this case the mean, may lie. These intervals may be calculated by, for example, a producer who wishes to estimate his mean daily output; a medical researcher who wishes to estimate the mean response by patients to a new drug; etc.

 

margin of error

95% of sample means fall within 1.96 standard errors from the population mean.

98% of sample means fall within 2.33 standard errors from the population mean.
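So a 95% interval is just the sample mean plus or minus 1.96 standard errors; a minimal sketch (the function name is my own):

```python
import math

def mean_confidence_interval(sample_mean, sd, n, z=1.96):
    """Interval: sample_mean +/- z * (sd / sqrt(n)); z = 1.96 gives ~95%."""
    margin = z * sd / math.sqrt(n)          # the margin of error
    return (sample_mean - margin, sample_mean + margin)

low, high = mean_confidence_interval(100, 15, 36)   # margin = 1.96 * 2.5 = 4.9
```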

 

levels of likelihood

critical region

If the sample mean falls into the critical region, we can conclude that most likely we did not get this sample mean by chance.

The critical region defines values that are unlikely if the null hypothesis is true.

z-critical value

When we do a statistical test, we set up our own criteria for making a decision.

 

two-tailed test

 

t-test

We reject the null hypothesis when the p value is less than the α value.

cohen’s d:

standardized mean difference that measures the distance between means in standardized units


dependent t-test for paired samples:

The same subjects take the test twice.

within-subject:

  • two conditions: each subject is assigned both conditions in random order
  • pre-test, post-test
  • growth over time: longitudinal study
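A sketch of the dependent t-test on the per-subject differences (the scores are made up for illustration):

```python
import math

def paired_t(before, after):
    """t statistic for paired samples: mean difference over its standard error (df = n - 1)."""
    diffs = [a - b for b, a in zip(before, after)]
    n = len(diffs)
    mean_d = sum(diffs) / n
    sd_d = math.sqrt(sum((d - mean_d) ** 2 for d in diffs) / (n - 1))
    return mean_d / (sd_d / math.sqrt(n))

# The same four subjects tested twice (hypothetical scores):
t = paired_t([10, 12, 9, 11], [12, 14, 10, 13])   # mean diff 1.75, t = 7.0
```

Cohen's d for this design would be mean_d / sd_d, i.e. the mean difference expressed in standardized units.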

statistical significance

  • reject the null
  • results are not likely due to chance – sampling error

“statistically significant” finding

When a statistic is significant, it simply means that you are very sure that the statistic is reliable. It doesn’t mean the finding is important or that it has any decision-making utility.

To say that a significant difference or relationship exists only tells half the story. We might be very sure that a relationship exists, but is it a strong, moderate, or weak relationship? 

After finding a significant relationship, it is important to evaluate its strength. Significant relationships can be strong or weak. Significant differences can be large or small. It just depends on your sample size.

One-Tailed and Two-Tailed Significance Tests

When your research hypothesis states the direction of the difference or relationship, you use a one-tailed probability. For example, a one-tailed test would be used to test this null hypothesis: females will not score significantly higher than males on an IQ test.

A two-tailed test would be used to test this null hypothesis: there will be no significant difference in IQ scores between males and females.

Procedure Used to Test for Significance

Whenever we perform a significance test, it involves comparing a test value that we have calculated to some critical value for the statistic. It doesn’t matter what type of statistic we are calculating (e.g., a t-statistic, a chi-square statistic, an F-statistic, etc.), the procedure to test for significance is the same.

  1. Decide on the critical alpha level you will use (i.e., the error rate you are willing to accept).
  2. Conduct the research.
  3. Calculate the statistic.
  4. Compare the statistic to a critical value obtained from a table.

If your statistic is higher than the critical value from the table:

  • Your finding is significant.
  • You reject the null hypothesis.
  • The probability is small that the difference or relationship happened by chance, and p is less than the critical alpha level (p < alpha ).
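Step 4 is a single comparison; a minimal sketch (the 1.96 critical value assumes a two-tailed z-test at alpha = .05):

```python
def is_significant(test_statistic, critical_value):
    """Step 4: the finding is significant if the statistic exceeds the table value."""
    return abs(test_statistic) > critical_value

# e.g. a calculated z of 2.4 against the two-tailed .05 critical value:
print(is_significant(2.4, 1.96))   # True: reject the null hypothesis
```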

via http://www.statpac.com/surveys/statistical-significance.htm

The formula for calculating margin of error is written for two-tailed tests, yet it only takes one side of t = 0 into account: in the earlier example, the margin of error for a two-tailed test used a single t-critical value, not t-critical positive minus t-critical negative. So when we have a t-critical value from a one-tailed test, how can we reuse the same formula and avoid memorizing a second one? We simply treat the test as if it were two-tailed. This works because the critical area of a one-tailed test equals the area in each tail of a two-tailed test run at double the alpha: taking -1.711 as our t-critical (one-tailed, alpha = 0.05) is the same as running a two-tailed test whose total alpha is 0.05 × 2 = 0.1. Keeping that equivalence in mind, we can think of the test as two-tailed and use the familiar two-tailed margin-of-error formula unchanged.

when do we use a t-test rather than a z-test?

The z-test and the t-test are basically the same; they compare two means to suggest whether both samples come from the same population. There are, however, variations on the theme for the t-test. If you have a sample and wish to compare it with a known mean (e.g. a national average), the single-sample t-test is available. If your two samples are not independent of each other and have some factor in common, e.g. geographical location or before/after treatment, the paired-sample t-test can be applied. There are also two variations on the two-sample t-test: the first uses samples that do not have equal variances, and the second uses samples whose variances are equal.

 

hypothesis test

A hypothesis test evaluates a claim about data: for example, that a new drug is better than the current drug for treatment of the same symptoms.

In each problem considered, the question of interest is simplified into two competing claims / hypotheses between which we have a choice; the null hypothesis, denoted H0, against the alternative hypothesis, denoted H1.

These two competing claims / hypotheses are not however treated on an equal basis: special consideration is given to the null hypothesis.

We have two common situations:

  1. The experiment has been carried out in an attempt to disprove or reject a particular hypothesis, the null hypothesis, thus we give that one priority so it cannot be rejected unless the evidence against it is sufficiently strong. For example,
    H0: there is no difference in taste between coke and diet coke
    against
    H1: there is a difference.
  2. If one of the two hypotheses is ‘simpler’ we give it priority so that a more ‘complicated’ theory is not adopted unless there is sufficient evidence against the simpler one. For example, it is ‘simpler’ to claim that there is no difference in flavour between coke and diet coke than it is to say that there is a difference.

The hypotheses are often statements about population parameters like expected value and variance; for example H0 might be that the expected value of the height of ten year old boys in the Scottish population is not different from that of ten year old girls. A hypothesis might also be a statement about the distributional form of a characteristic of interest, for example that the height of ten year old boys is normally distributed within the Scottish population.

The outcome of a hypothesis test is “Reject H0 in favour of H1” or “Do not reject H0”.

Null Hypothesis

We give special consideration to the null hypothesis. This is due to the fact that the null hypothesis relates to the statement being tested, whereas the alternative hypothesis relates to the statement to be accepted if / when the null is rejected.

The final conclusion once the test has been carried out is always given in terms of the null hypothesis. We either “Reject H0 in favour of H1” or “Do not reject H0”; we never conclude “Reject H1”, or even “Accept H1”.

If we conclude “Do not reject H0”, this does not necessarily mean that the null hypothesis is true, it only suggests that there is not sufficient evidence against H0 in favour of H1. Rejecting the null hypothesis then, suggests that the alternative hypothesis may be true.

Type I Error

In a hypothesis test, a type I error occurs when the null hypothesis is rejected when it is in fact true; that is, H0 is wrongly rejected.

Type II Error

In a hypothesis test, a type II error occurs when the null hypothesis, H0, is not rejected when it is in fact false. For example, in a clinical trial of a new drug, the null hypothesis might be that the new drug is no better, on average, than the current drug; i.e. H0: there is no difference between the two drugs on average.

 Critical Value(s)

The critical value(s) for a hypothesis test is a threshold to which the value of the test statistic in a sample is compared to determine whether or not the null hypothesis is rejected.

The critical value for any hypothesis test depends on the significance level at which the test is carried out, and whether the test is one-sided or two-sided.

Critical Region

The critical region CR, or rejection region RR, is a set of values of the test statistic for which the null hypothesis is rejected in a hypothesis test. That is, the sample space for the test statistic is partitioned into two regions; one region (the critical region) will lead us to reject the null hypothesis H0, the other will not. So, if the observed value of the test statistic is a member of the critical region, we conclude “Reject H0”; if it is not a member of the critical region then we conclude “Do not reject H0”.

Significance Level

The significance level of a statistical hypothesis test is a fixed probability of wrongly rejecting the null hypothesis H0, if it is in fact true.

It is the probability of a type I error and is set by the investigator in relation to the consequences of such an error. That is, we want to make the significance level as small as possible in order to protect the null hypothesis and to prevent, as far as possible, the investigator from inadvertently making false claims.

The significance level is usually denoted by alpha
Significance Level = P(type I error) = alpha

 Usually, the significance level is chosen to be 0.05 (or equivalently, 5%).

 P-Value

The probability value (p-value) of a statistical hypothesis test is the probability of getting a value of the test statistic as extreme as or more extreme than that observed by chance alone, if the null hypothesis H0, is true.

It is the probability of wrongly rejecting the null hypothesis if it is in fact true.

The p-value is compared with the actual significance level of our test and, if it is smaller, the result is significant. That is, if the null hypothesis were to be rejected at the 5% significance level, this would be reported as “p < 0.05”.

Small p-values suggest that the null hypothesis is unlikely to be true. The smaller it is, the more convincing is the rejection of the null hypothesis. It indicates the strength of evidence for say, rejecting the null hypothesis H0, rather than simply concluding “Reject H0‘ or “Do not reject H0“.

One-sided Test

A one-sided test is a statistical hypothesis test in which the values for which we can reject the null hypothesis, H0, are located entirely in one tail of the probability distribution.

In other words, the critical region for a one-sided test is the set of values less than the critical value of the test, or the set of values greater than the critical value of the test.

A one-sided test is also referred to as a one-tailed test of significance.

The choice between a one-sided and a two-sided test is determined by the purpose of the investigation or prior reasons for using a one-sided test.

Example

Suppose we wanted to test a manufacturer's claim that there are, on average, 50 matches in a box. We could set up the following hypotheses:
H0: µ = 50,
against
H1: µ < 50 or H1: µ > 50
Either of these two alternative hypotheses would lead to a one-sided test. Presumably, we would want to test the null hypothesis against the first alternative hypothesis since it would be useful to know if there is likely to be less than 50 matches, on average, in a box (no one would complain if they get the correct number of matches in a box or more).
Yet another alternative hypothesis could be tested against the same null, leading this time to a two-sided test:
H0: µ = 50,
against
H1: µ not equal to 50
Here, nothing specific can be said about the average number of matches in a box; only that, if we could reject the null hypothesis in our test, we would know that the average number of matches in a box is likely to be less than or greater than 50.
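For the matchbox example, a sketch of the calculation with a hypothetical sample (the counts are made up; the critical value would come from a t table with df = n - 1):

```python
import math

boxes = [48, 50, 49, 47, 50, 48, 49, 46, 50, 48]   # hypothetical matches per box
n = len(boxes)
mean = sum(boxes) / n                               # 48.5
sd = math.sqrt(sum((x - mean) ** 2 for x in boxes) / (n - 1))

t = (mean - 50) / (sd / math.sqrt(n))   # test statistic under H0: mu = 50
# One-sided H1: mu < 50  -> reject if t is below the (negative) one-tailed critical value.
# Two-sided H1: mu != 50 -> reject if |t| exceeds the two-tailed critical value.
```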
Two-Sided Test

A two-sided test is a statistical hypothesis test in which the values for which we can reject the null hypothesis, H0, are located in both tails of the probability distribution.

In other words, the critical region for a two-sided test is the set of values less than a first critical value of the test and the set of values greater than a second critical value of the test.

A two-sided test is also referred to as a two-tailed test of significance.

The choice between a one-sided test and a two-sided test is determined by the purpose of the investigation or prior reasons for using a one-sided test.


via http://www.stats.gla.ac.uk/steps/glossary/hypothesis_testing.html#h0

 

 
