get lighter/darker shades of a color (Android)


Color choosers are a dime a dozen online, but this is a very nice one. Even then, finding lighter and darker colors in Photoshop is somewhat unintuitive, because it uses HSV rather than HSL for its color picker. Its stated purpose is to let you specify a color and then find shades that are darker and lighter than that color. HSV (also called HSB) splits a color into three parameters: Hue, the pure color's position on the color wheel; Saturation, the proportion of pure color in the mix (S = 1 is the purest color, S = 0 is a gray); and Value, the color's brightness (V = 0 is black). HSV is useful for recoloring without destroying image structure: shifting the hue value of every color in, say, a button moves all of them around the color wheel together, so the color combinations stay coordinated.



RGB appeals to the eye's response to color, while YUV is built around our sensitivity to luminance: Y carries the luminance and U, V carry the chrominance (which is why black-and-white material can simply omit U and V), also written as Cr and Cb, so YUV data is usually laid out in a Y:UV format.

A first scheme for deciding whether a color is light: sum the RGB values and call the color light when the sum passes a threshold:

    if ($R + $G + $B >= 450) { // add shadow }

This is very naive and the results are poor, especially when B is large, presumably because the eye's sensitivity differs across the R, G and B channels. A better approach is to use the YUV encoding to judge how dark a color is. Y is the luminance (gray level), so we only need to compute Y and check whether it is bright enough:

    $grayLevel = $R * 0.299 + $G * 0.587 + $B * 0.114;
    if ($grayLevel >= 192) { // add shadow }

The effect is similar to Photoshop's Desaturate command.


Color difference is, simply put, the difference between colors: a quantitative expression of perceived color difference, described through differences in the three color attributes lightness, hue and chroma. A lightness difference expresses how much darker or lighter a color is, a hue difference expresses a shift in hue (toward red or toward blue, say), and a chroma difference expresses a difference in vividness. A color model built from lightness (L) and two chromatic axes (a: green-red, b: blue-yellow) is called the Lab color model, as distinct from the RGB and CMYK models.


Color-difference formula. A good way is to convert the color to the HSL color space, adjust the "lightness" component, and convert back to RGB. Another option is to use the YUV color space, for which the calculations are easier: in YUV you can adjust darkness by changing the Y value.
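That HSL round trip can be sketched with Python's standard colorsys module (which calls the space HLS; the helper name and scaling factor here are illustrative):

```python
import colorsys

def adjust_lightness(rgb, factor):
    """Lighten (factor > 1) or darken (factor < 1) an RGB color by
    scaling the L component in HLS space and converting back."""
    r, g, b = (c / 255.0 for c in rgb)
    h, l, s = colorsys.rgb_to_hls(r, g, b)
    l = max(0.0, min(1.0, l * factor))
    r, g, b = colorsys.hls_to_rgb(h, l, s)
    return tuple(round(c * 255) for c in (r, g, b))

print(adjust_lightness((255, 0, 0), 1.5))  # pure red, lightened toward pink
print(adjust_lightness((255, 0, 0), 0.5))  # pure red, darkened toward maroon
```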

visual search

The visual search takes your drawing as a starting point and simplifies it. It uses the shapes and the colour tones in your image to find other images in the database that have similar shapes or tones. This explains why some drawings can produce really strange results.

The visual search is not meant to be an accurate search tool.

You will get the best results from drawing simple shapes using just one or two colours.

what’s the best choice for colormap

This is a difficult question because the best choice depends on the viewer’s task, on whether another visualization technique such as a height field is used in conjunction with color, and on the frequency content and noise within the data displayed.

Although the rainbow color map is inferior to other color maps in almost every circumstance, there is no color map that is better than all other maps in all circumstances.

The purpose of visualization is to effectively convey information to human viewers.

The selection of the best color map depends so critically on the data set and addressed questions that there is not a single best choice, but rather a collection of sets with different characteristics. The best solution would present the user with a choice whenever a color map is created, listing best types for each circumstance.

viewers can see details more readily when luminance contrast is present than when it is not.

Luminance is based on inputs from only the red and green channels—making it impossible to generate a uniform-luminance rainbow scale including deep blue.

The most obvious perceptually ordered color map with luminance contrast is the gray-scale color map.

Unfortunately, the early visual system converts from absolute brightness to brightness relative to surround, which distorts readings enough to produce errors of up to 20 percent of the entire scale.



What’s the Difference between a Hue, Tint, Shade and Tone?


What’s the difference between a Hue and a Color? Most people, even the pros, get confused about this. Basically they mean the same thing and can be used interchangeably.

Most Color Wheels only show bright colors which can create confusion. It’s not always easy to see that every color, even black, has a Primary, Secondary or Tertiary Color as its root.



What's a Hue

These are the family of twelve purest and brightest colors.

  • Three Primary Colors
  • Three Secondary Colors
  • Six Tertiary Colors

They form the full spectrum of colors which progress around the Primary Color Wheel in gradual increments.

With just these twelve colors, you can literally mix an infinite number of color schemes. Most of the time you will modify these twelve basic hues by mixing in other colors.

But nothing is stopping you from using them full-strength. This multi-color scheme would be bold, cheerful and exciting. It would be great in a child’s playroom. Bright, bold selections can also work to grab attention in advertising and marketing graphics. Creating a painting with these would be a little jarring.


What's a Tint

Every individual color on the Basic Color Wheel can be altered in three ways by Tinting, Shading or Toning. And that’s before we even think about mixing two colors together.

Let’s start with lightening the twelve basic colors to create Tints.

A Tint is sometimes called a Pastel. Basically it’s simply any color with white added.  A tint is lighter than the original color.

That means you can go from an extremely pale, nearly white to a barely tinted pure hue. Artists often add a tiny touch of white to a pure pigment to give the color some body. So for example a bright Red can quickly become a bright Pink.

A color scheme using Tints is usually soft, youthful and soothing, especially the lighter versions. All tints work well in feminine environments. You often see advertising, marketing and websites use pale and hot pastels if they are targeting women as a demographic. In painting you might save your lightest pastels for the focal point or use pastels for the entire painting.


What's a Shade

So now that you know how to lighten, what’s the easiest way to make your colors darker?

A Shade is simply any color with black added.  A shade is darker than the original color.

Just as with making tints, you can mix any of the twelve pure colors together. Then simply add any amount of black and you have created a shade of the mixture.

That means you can go from an extremely dark, nearly black to a barely shaded pure hue.

Most artists use black sparingly because it can quickly destroy your main color. Some artists prefer not to use it at all. Instead they understand the rules of color well enough to make their own black mixtures.

Shades are deep, powerful and mysterious. Be careful not to use too much black as it can get a little overpowering. These darks work well in a masculine environment. They are best used as dark accents in art and marketing graphics.


What's a Tone

Now that you understand how to lighten and darken your twelve colors how do you tone them down? A tone is softer than the original color.

Almost every color we see in our day-to-day world has been toned either a little or a lot. This makes for more appealing color combinations.

A Tone is created by adding both White and Black, which together make grey. Any color that is “greyed down” is considered a Tone.

Tones are somehow more pleasing to the eye. They are more complex, subtle and sophisticated.

Artists usually mix a little grey into every paint mixture to adjust the value and intensity of their pigment. Tones are the best choice for most interior decorating because they’re more interesting. They work well in any Color Scheme you might plan.
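All three operations boil down to mixing a color toward white, black, or grey. A minimal RGB sketch (the 50% mix amounts and helper names are illustrative):

```python
def mix(color, other, amount):
    """Linearly interpolate each RGB channel toward `other` by `amount` (0..1)."""
    return tuple(round(c + (o - c) * amount) for c, o in zip(color, other))

def tint(color, amount=0.5):   # mix with white -> lighter
    return mix(color, (255, 255, 255), amount)

def shade(color, amount=0.5):  # mix with black -> darker
    return mix(color, (0, 0, 0), amount)

def tone(color, amount=0.5):   # mix with middle grey -> duller
    return mix(color, (128, 128, 128), amount)

red = (255, 0, 0)
print(tint(red), shade(red), tone(red))
```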


Saturation is a color term commonly used by (digital / analog) imaging experts.

Saturation is usually one property of three when used to determine a certain color and measured as percentage value.
Saturation defines a range from pure color (100%) to gray (0%) at a constant lightness level. A pure color is fully saturated.
From a perceptional point of view saturation influences the grade of purity or vividness of a color/image. A desaturated image is said to be dull, less colorful or washed out but can also make the impression of being softer.
We will clear up the term saturation from a color mixing point of view in the color spaces section.

Lightness is a color term commonly used by (digital / analog) imaging experts.

Lightness is usually one property of three when used to determine a certain color and measured as percentage value.
Lightness defines a range from dark (0%) to fully illuminated (100%). Any original hue has the average lightness level of 50%.
A painter might say lightness is the range from fully shaded to fully tinted.
You can lighten or darken a color by changing its lightness value.

Chromatic Signal / Chromaticity / Chroma

This family of color terms is commonly used by (digital / analog) imaging and video experts.

In the previous section we learned that color perception is a result of achromatic and chromatic signals.

We can therefore define a chromatic signal as the component of color perception that is not achromatic, i.e. any deviation from neutral-color perception (dark, grayscale, illuminated).

The chromatic intensity or chromaticity is the intensity of the chromatic signal contributing to color perception. Chromaticity is similar to saturation since color / an image with a low chromaticity value is not very colorful.

Chroma is a component of a color model. There’s a blue-yellow and a red-green chroma component.

Intensity / Luminosity / Luma

In general, intensity is a synonym for magnitude, degree or strength. It can therefore be used in conjunction with any color property. Nevertheless, it carries special meaning in certain contexts.

For painters the meaning of intensity is equivalent to the meaning of saturation.
For physicists intensity refers to different aspects of radiation.
When speaking of light, the intensity can mean the number of photons a light source emits.

The following sources provide a deeper insight:
– Luminosity
– Intensity
– Luminosity Function
– Lumen

Luma (%) is the intensity of the achromatic signal contributing to our color perception.

Brightness / (relative) Luminance

Brightness is an attribute of our perception which is mainly influenced by a color’s lightness. This is probably why brightness and lightness are often mixed up. Brightness is not a color property, if used “correctly”.

For a color of a specific hue, the perception of brightness is also more intense if we increase saturation: a higher level of saturation makes a color look brighter.
In relation to other colors, the perceived brightness of a color is also influenced by its hue. We can then speak of (relative) luminance to refer to brightness.

It’s very important to know more about luminance.


A grayscale is a series of neutral colors, ranging from black to white, or the other way around. Each step’s color value is usually shifted by constant amounts.

A grayscale color can be determined by a value of a one-dimensional color space:
On a white surface (e.g. paper) the grayscale color’s value equals the relative intensity of black (ink) applied to the medium.
On a black surface (e.g. a monitor) the grayscale color’s value equals the relative intensity of white (light) applied to the medium.

Primary Colors


In theory, the Primary Colors are the root of every other hue imaginable. The primary pigments used in the manufacture of paint come from the pure source element of that Hue. There are no other pigments blended in to alter the formula.

Think of the three Primaries as the Parents in the family of colors.

In paint pigments, pure Yellow, pure Red, and pure Blue are the only hues that can’t be created by mixing any other colors together. Printer inks and digital primaries are referred to as Yellow, Magenta and Cyan.

Secondary Colors


When you combine any two of the Pure Primary Hues, you get three new mixtures called Secondary Colors.

Think of the three Secondaries as the Children in the family of colors.

  • Yellow + Red = ORANGE
  • Red + Blue = VIOLET or PURPLE
  • Blue + Yellow = GREEN

Tertiary Colors


When you mix a Primary and its nearest Secondary on the Basic Color Wheel you create six new mixtures called Tertiary colors.

Think of the six Tertiary Colors as the Grandchildren in the family of colors, since their genetic makeup combines a Primary and Secondary color.

  • Yellow + Orange = YELLOW-ORANGE
  • Red + Orange = RED-ORANGE
  • Red + Violet = RED-VIOLET
  • Blue + Violet = BLUE-VIOLET
  • Blue + Green = BLUE-GREEN
  • Yellow + Green = YELLOW-GREEN

Eleven basic colour terms

30 day blog challenge - day 3 - basic color terms.

When Brent Berlin and Paul Kay introduced basic colour terms in their 1969 book ‘Basic color terms: Their Universality and Evolution’, it was the start of a new way of thinking about colour terms and colour naming.

Berlin and Kay’s 1969 study was a compilation of colour terms in 98 languages from around the world. Interestingly, their cross-cultural research identified these eleven main colours within the many languages.

yellow – orange – red – purple – blue – green – pink – brown – grey – black – white


A picture is worth many words. The path to a more colorful language, according to Berlin and Kay (1969).

What it says is this. If a language has just two color terms, they will be a light and a dark shade – blacks and whites. Add a third color, and it’s going to be red. Add another, and it will be either green or yellow – you need five colors to have both. And when you get to six colors, the green splits into two, and you now have a blue. What we’re seeing here is a deeply trodden road that most languages seem to follow, towards greater visual discernment (92 of their 98 languages seemed to follow this basic route).


Why should different cultures draw the same boundaries? If we speak different languages with largely independent histories, shouldn’t our ancestors have carved up the visual atlas rather differently?

First, cultures are quite different in how their words paint the world. Take a look at this interactive map. For the 110 cultures, you can see how many basic words they use for colors. To the Dani people who live in the highlands of New Guinea, objects come in just two shades. There’s mili for the cooler shades, from blues and greens to black, and mola for the lighter shades, like reds, yellows and white. Some languages have just three basic colors, others have 4, 5, 6, and so on. There’s even a debate as to whether the Pirahã tribe of the Amazon have any specialized color words at all! (If you ask a Pirahã tribe member to label something red, they’ll say that it’s blood-like.)

But there’s still a pattern hidden in this diversity. You might be wondering what happened to the cartoon picture of languages. Is there still a main road? Or are there languages that travel off the beaten path? The answer is yes, to both questions.

Goodbye yellow brick road. A more refined picture of how languages name colors.

The picture looks like a mess, but keep in mind that five out of six languages surveyed follow the central route. So here’s the story. You start with a black-and-white world of darks and lights. There are warm colors, and cool colors, but no finer categories. Next, the reds and yellows separate away from white. You can now have a color for fire, or the fiery color of the sunset. There are tribes that have stopped here. Further down, blues and greens break away from black. Forests, skies, and oceans now come of their own in your visual vocabulary. Eventually, these colors separate further. First, red splits from yellow. And finally, blue from green. The forest unmingles from the sky. In the case of Japan, that last transition essentially happened in modern history!


Work done by Colin Ware (building off the work of Kay and others, together with his own work on perception; Ware, Colin (2000) Information Visualization: Perception for Design, San Francisco: Morgan Kaufmann) has produced a maximum set of 12 colors that can be accurately differentiated by people with standard vision without errors. These colors (shown below) are: 1. Red, 2. Green, 3. Yellow, 4. Blue, 5. Black, 6. White, 7. Pink, 8. Cyan, 9. Gray, 10. Orange, 11. Brown, 12. Purple.

Colors from Set 1 should be used before colors from Set 2.
These colors are the most easily identifiable colors based on perceptual research, and can be used to code data with a high degree of decoding accuracy for people who do not experience color blindness.  The 12 colors above were then taken by me as a starting point from which to extract a larger set of colors, in order to assign one color to every letter of the alphabet.
We tend to use qualifiers (darker, lighter, -ish, etc.) to describe the differences in colors instead of giving colors individual names.  (see Munsell Color System.)
People can distinguish differences in hue, saturation and value (brightness) among in excess of a million color combinations (Halsey and Chapanis, 1951; Kaiser and Boynton, 1989) when colors are compared side by side in optimal lighting conditions.
That being said, color memory in people is not very good, nor is the ability to discriminate among specific colors that are even remotely similar when they are separated in time.  In that situation the number of colors that can be readily identified drops to 12.

How the 26 colors were chosen

RGB hexadecimal value

You cannot specify these colours in HTML and CSS by their colour name but you can use their RGB hexadecimal value, eg:

    <font color="#800080">

and in CSS you can also use their RGB decimal values, eg:

    p { color: rgb(128,0,128); }

Colour Words and Colour Categorization

500+ colors

The 330 Munsell chips in the WCS color chart. Of the chips, 320 consist of 40 hues spanning the color circle (arranged horizontally), each printed in 8 values (arranged vertically). An additional 10 chips are achromatic colors (sidebar).
The Munsell color table
The Munsell color system is a color space that specifies colors based on three dimensions: hue, value (lightness), and chroma (color purity). It was created by Professor Albert H. Munsell in the first decade of the 20th century and adopted by the USDA as the official color system for soil research in the 1930s.


For digital electronics and multimedia, the color-space concepts we encounter most often are mainly RGB and YUV (in fact, each of these two families contains many concrete color representations and models, such as sRGB, Adobe RGB, YUV422, YUV420 ...). RGB describes color by the principle of the three additive primaries, while YUV describes color by the principle of luminance and color difference.

The ones we come into contact with most often include RGB / CMYK / YIQ / YUV / HSI, and so on.

The YIQ color space is typically used by North American television systems and belongs to the NTSC (National Television Standards Committee) system. Here Y does not stand for yellow but for the luminance (brightness) of a color; in fact Y is the gray value of the image, while I and Q are the chrominance, the attributes describing the color saturation of the image. In the YIQ system, the Y component carries the brightness information of the image and the I and Q components carry the color information: the I component represents the color variation from orange to cyan, and the Q component the variation from purple to yellow-green. Converting a color image from RGB to YIQ separates the brightness information of the color image from the chrominance information, so each can be processed independently.
Y = 0.299R + 0.587G + 0.114B
I = 0.596R – 0.275G – 0.321B
Q = 0.212R – 0.523G + 0.311B
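As a quick sketch, the transform above is a direct weighted sum (assuming channel values normalized to 0..1):

```python
def rgb_to_yiq(r, g, b):
    """Convert normalized RGB (0..1) to YIQ using the NTSC matrix above."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    i = 0.596 * r - 0.275 * g - 0.321 * b
    q = 0.212 * r - 0.523 * g + 0.311 * b
    return y, i, q

# Pure white carries full luminance and no chrominance.
print(rgb_to_yiq(1.0, 1.0, 1.0))
```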


YUV (also called YCrCb) is a color encoding method used by European television systems (it belongs to PAL) and is the color space of the PAL and SECAM analog color television standards. The letters Y, U and V are not an abbreviation of English words: Y stands for luminance, and U and V are the two color-difference components that make up the chrominance. In a modern color television system, a three-tube color camera or a color CCD camera captures the image; the color image signal is color-separated and each part amplified and corrected to obtain RGB, which a matrix conversion circuit turns into a luminance signal Y and two color-difference signals B-Y (i.e. U) and R-Y (i.e. V); finally the transmitter encodes the luminance and the two color-difference signals separately and sends them over the same channel. This way of representing color is the so-called YUV color space. The importance of the YUV color space is that its luminance signal Y and chrominance signals U, V are separated: if there is only the Y component and no U, V components, the image represented is a black-and-white grayscale image. Color television adopted the YUV space precisely so that the luminance signal Y could solve the compatibility problem between color and black-and-white sets, letting black-and-white televisions also receive color broadcasts.

“Y” is the luminance (Luminance or Luma), i.e. the grayscale value, while “U” and “V” are the chrominance (Chrominance or Chroma), which describe the image’s hue and saturation and are used to specify a pixel’s color. The luminance is built from the RGB input signal by summing particular weighted portions of the RGB signal together. The chrominance defines two aspects of color, hue and saturation, represented by Cr and Cb respectively: Cr reflects the difference between the red part of the RGB input signal and the luminance of the RGB signal, while Cb reflects the difference between the blue part of the RGB input signal and the luminance of the RGB signal.

RGB -> YUV:

    In practice this amounts to: Y = 0.30R + 0.59G + 0.11B, U = 0.493(B - Y), V = 0.877(R - Y)
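The same kind of sketch for these PAL formulas (normalized channels; for any pure gray the color differences vanish, which is the black-and-white compatibility property described above):

```python
def rgb_to_yuv(r, g, b):
    """Convert normalized RGB (0..1) to YUV using the formulas above."""
    y = 0.30 * r + 0.59 * g + 0.11 * b
    u = 0.493 * (b - y)
    v = 0.877 * (r - y)
    return y, u, v

# A middle gray keeps only luminance: U and V are (numerically) zero.
print(rgb_to_yuv(0.5, 0.5, 0.5))
```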


HSV for example uses the three components Hue, Saturation and Value, that very roughly speaking can be thought of as a color’s tint, vivacity and brightness (the terms used here have a very precise meaning in the color theory).
As an example, look at this “slice” of 3D HSV space:
[figure: a slice of 3D HSV space]
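The three HSV components can be poked at with Python's standard colorsys module (a small sketch; channel values are normalized to 0..1):

```python
import colorsys

# A saturated orange: full saturation and value, hue around 30 degrees.
h, s, v = colorsys.rgb_to_hsv(1.0, 0.5, 0.0)
print(h * 360, s, v)

# Dropping the saturation moves the color toward gray at the same brightness.
print(colorsys.hsv_to_rgb(h, 0.2, v))
```

Lowering V instead would darken the color toward black, the HSV counterpart of shading.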

Wavelet Decomposition in Computer Vision

Most image querying systems rely on a metric to compare images. The intent is to have an image metric that is fast to compute and produces good results even with distorted query images. In work by Salesin and colleagues, a multiresolution image querying method is introduced that is designed to address this problem. The method is based on wavelets.

Wavelets are coefficients achieved by applying a function repeatedly to a matrix that creates signatures for each matrix. Salesin defines an algorithm that uses truncated, quantized versions of the wavelet decompositions (i.e, signatures). These signatures are based on the most significant coefficients computed using a two-dimensional Haar wavelet decomposition. The Haar wavelet decomposition basically averages the rows and columns of each image for sqrt(n) times. This eventually produces the average color of the image, as well as other defining signatures. The signatures are quantized such that +1 is used to represent a large positive value, and -1 is used for large negative values. The authors found that this sped up the search process and reduced storage requirements. This is a multiresolution approach, meaning that queries do not have to have the same resolution as potential targets. This method can be used to search through images that have been compressed using wavelets. This is very important since most databases do not have enough space to store full images. Thus indexes are created and images are kept outside of main memory and disk space.

The user may choose the color, shape, and texture, or draw a sketch. The user may perform query by example, in which the user chooses an image returned in response to a query and asks the system to return other similar images. The use of keyword search also provides more accurate search results.

QBIC addresses two problems inherent in image searching.

  1. The first problem is that the metric for feature vectors is non-Euclidean. QBIC solved the problem by proving that the non-Euclidean distance can be bounded by a Euclidean distance.
  2. Secondly, the vectors representing the images have high dimensionality. This problem was solved by using the Karhunen Loeve transform to reduce the dimensionality of the vectors. One drawback from these solutions is that they may allow false hits, but they would not lead to false dismissals.

Also the interface in which images are displayed does not reveal what other images are in the database other than the ones being currently displayed.

Do not automatically reduce all the features to a few that can be displayed in two or three dimensions, as this might lose information.

By allowing the user to select the dimensions of interest, the user can gain an understanding of the spatial relationships in a scatterplot while exploring the database.

Image retrieval relies on a metric to compare images; the intent is to have a metric that is fast to compute and produces good results, even with distorted query images. The multiresolution image querying method is designed to address this problem.

Wavelets are coefficients achieved by applying a function repeatedly to a matrix that creates signatures for each matrix.

The Haar wavelet decomposition repeatedly averages and differences the rows and columns of each image (a logarithmic number of passes). This eventually produces the average color of the image, as well as other defining signatures. The signatures are quantized such that +1 is used to represent a large positive value, and -1 is used for large negative values.

This sped up the search process and reduced storage requirements. This is a multiresolution approach, meaning that queries do not have to have the same resolution as potential targets.

  • Haar wavelets are easy to compute and produce an orthogonal basis
  • The problem with Haar wavelet analysis is that the ease of computation produces more error in image reconstruction, due to the blocking effect: images look like a collection of blocks and lack the smooth edges that were originally present.
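The averaging-and-differencing at the heart of the decomposition can be sketched in a few lines of Python (a simplified 1D version with plain pairwise averages, not the paper's exact normalization):

```python
def haar_1d(values):
    """One full 1D Haar decomposition: repeatedly replace the current
    low-frequency half with pairwise averages, storing the pairwise
    differences (detail coefficients) after them."""
    values = list(values)
    n = len(values)  # assumed to be a power of two
    while n > 1:
        half = n // 2
        avgs = [(values[2 * i] + values[2 * i + 1]) / 2 for i in range(half)]
        diffs = [(values[2 * i] - values[2 * i + 1]) / 2 for i in range(half)]
        values[:n] = avgs + diffs
        n = half
    return values

print(haar_1d([9, 7, 3, 5]))  # first coefficient is the overall average (6.0)
```

A 2D decomposition applies the same step along every row and then every column; keeping only the largest-magnitude coefficients, quantized to ±1, gives exactly the kind of signature described above.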

In wavelet image retrieval systems, instead of storing the actual images in the database, each image is decomposed, and the n largest (in magnitude) coefficients for each image are saved.

When the user poses a query, the query image is also decomposed, creating a signature of n coefficients.

This signature is compared to those in the image database using an image query metric.

This metric is faster to compute than the L^1 or L^2 metrics. The query metric assigns variable weights to the coefficients that can be tuned according to the types of images involved in the queries. These weights can be determined using statistical techniques.

Simplify the query metric by only considering terms in the query signature that are non-zero. Therefore, queries without a lot of detail can be matched to more detailed images.

The reverse is not true: a more detailed query image will not always match a less detailed target image.
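A toy sketch of this kind of signature matching (uniform weights and sparse {position: +1/-1} maps; the helper names are illustrative, and the actual metric subtracts matched weights from a base distance rather than counting matches):

```python
def signature(coeffs, n):
    """Keep the n largest-magnitude coefficients, quantized to +1 / -1."""
    top = sorted(range(len(coeffs)), key=lambda i: abs(coeffs[i]), reverse=True)[:n]
    return {i: (1 if coeffs[i] > 0 else -1) for i in top if coeffs[i] != 0}

def query_score(query_sig, target_sig, weight=1.0):
    """Weighted count of query coefficients matched by the target.
    Only the non-zero query terms are considered, which is why a sparse
    query can still match a more detailed target."""
    return sum(weight for i, s in query_sig.items() if target_sig.get(i) == s)

q = signature([0.9, -0.4, 0.0, 0.05], n=2)   # sparse query sketch
t = signature([1.1, -0.2, 0.3, 0.0], n=3)    # more detailed target
print(query_score(q, t))  # both query terms are matched: 2.0
```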


Most search systems are static in nature (i.e., what you ask for is what you get). However, the user may want to change his/her query after seeing the system’s results.

TERPSI makes queries more interactive by allowing users to add more images to a search key after an initial batch of images has been returned (Jones, 1997). The system retrieves images that it believes are most like the query image, which may not satisfy the user. For this reason, relevance feedback was implemented to improve the quality of the results: TERPSI’s relevance feedback helps users fine-tune their query specifications. For users who are artistically capable, TERPSI also lets them draw an image to act as the query image.


Haar Wavelet Transform

[Figures: original image; image coefficients; mask in coefficient order; image coefficients quantized according to the mask]


Image Compression

The Haar transform can be used in lossy image compression:

1. Original image
2. Haar coefficients
3. Quantized coefficients
4. Reconstructed image
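The four steps can be sketched with a single decomposition level (plain pairwise averages and differences; thresholding the detail coefficients is the lossy quantization step):

```python
def haar_step(v):
    """One level of 1D Haar decomposition: pairwise averages then differences."""
    half = len(v) // 2
    return ([(v[2 * i] + v[2 * i + 1]) / 2 for i in range(half)]
            + [(v[2 * i] - v[2 * i + 1]) / 2 for i in range(half)])

def inverse_step(v):
    """Invert one level: rebuild each pair from its average and difference."""
    half = len(v) // 2
    out = []
    for a, d in zip(v[:half], v[half:]):
        out += [a + d, a - d]
    return out

signal = [9, 7, 3, 5, 6, 10, 2, 6]
coeffs = haar_step(signal)                               # 2. Haar coefficients
quantized = [c if abs(c) >= 1.5 else 0 for c in coeffs]  # 3. drop small detail
restored = inverse_step(quantized)                       # 4. lossy reconstruction
print(restored)
```

The restored signal matches the original wherever the detail coefficients survived the threshold and is locally averaged where they did not.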

log-polar coordinates

Image analysis

Already at the end of the 1970s, applications of the discrete spiral coordinate system were given in image analysis. Representing an image in this coordinate system rather than in Cartesian coordinates gives computational advantages when rotating or zooming an image. The photoreceptors in the retina of the human eye are also distributed in a way that closely resembles the spiral coordinate system.[2] It can also be found in the Mandelbrot fractal (see picture to the right).

Log-polar coordinates can also be used to construct fast methods for the Radon transform and its inverse.
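The conversion itself is a logarithm and an arctangent, which is what turns scalings and rotations into simple shifts along the two axes (a minimal sketch):

```python
import math

def to_log_polar(x, y):
    """Map Cartesian (x, y) to log-polar (rho, theta) about the origin."""
    r = math.hypot(x, y)
    return math.log(r), math.atan2(y, x)

# Scaling a point by e shifts rho by exactly 1; theta is unchanged.
rho1, th1 = to_log_polar(3.0, 4.0)
rho2, th2 = to_log_polar(3.0 * math.e, 4.0 * math.e)
print(rho2 - rho1, th2 - th1)
```

Rotating a point about the origin would, likewise, shift only theta.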

retina-like visual sensor

advantages of polar and log-polar mapping (for visual navigation)

apart from the shape-invariance property under scaling and rotation, the advantages stem from the considerable data reduction obtained with the non-uniform sampling, and from the high resolution in the central part of the field of view, which corresponds to the focus of attention.

bins that are uniform in log-polar space, making the descriptor more sensitive to positions of nearby sample points than to those points farther away.

corresponding points on two similar shapes will tend to have similar shape contexts.

Shape context at a given point on a shape is invariant under translation and scaling. Shape contexts are not invariant under arbitrary affine transforms, but the log-polar binning ensures that for small locally affine distortions due to pose change, intra-category variation, etc., the change of shape context is correspondingly small.

since the shape context gathers coarse information from the entire shape, it is relatively insensitive to the occlusion of any particular part.

Each shape context is a log-polar histogram of the coordinates of the rest of the point set measured using the reference point as the origin
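Such a histogram can be sketched directly (a simplified version; the bin counts and radius limits are illustrative, not taken from the paper):

```python
import math

def shape_context(points, ref, n_r=5, n_theta=12, r_min=0.125, r_max=2.0):
    """Log-polar histogram of `points` about the reference point `ref`.
    Radii are binned uniformly in log space between r_min and r_max."""
    hist = [[0] * n_theta for _ in range(n_r)]
    log_min, log_max = math.log(r_min), math.log(r_max)
    for (x, y) in points:
        if (x, y) == ref:
            continue  # skip the reference point itself
        dx, dy = x - ref[0], y - ref[1]
        r = math.hypot(dx, dy)
        if not (r_min <= r <= r_max):
            continue
        # log-radius bin: uniform in log space, so nearby points get finer bins
        ri = min(int((math.log(r) - log_min) / (log_max - log_min) * n_r), n_r - 1)
        # angle bin: uniform in angle
        theta = math.atan2(dy, dx) % (2 * math.pi)
        ti = min(int(theta / (2 * math.pi) * n_theta), n_theta - 1)
        hist[ri][ti] += 1
    return hist

pts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (-0.5, 0.0)]
h = shape_context(pts, ref=(0.0, 0.0))
print(sum(sum(row) for row in h))  # the three neighbors fall inside the radius range
```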



Adding content-based (visual) image searching features to Gallery could be done either by embedding imgSeek code or connecting to the imgSeek server via XML-RPC/SOAP.

There are roughly three levels of image representation used for CBIR:
1. Iconic – exact pixel values (e.g. Fast Multi-Resolution Image Querying, SIGGRAPH 1995)
2. Compositional – overall image appearance
3. Objects – things depicted in the image, their properties, and their relationships, e.g. Shape-based retrieval of images, Del Bimbo Elastic Shape Matching, 

The core of the algorithm is some (GPL) code taken from ImgSeek, which computes features based on an algorithm described in the paper Fast Multiresolution Image Querying. These features consist of 41 numbers for each colour channel (the code currently works in the YIQ colourspace).

40 of these numbers correspond to the 40 most significant wavelets found in a wavelet decomposition of the image. The final number is based on the average luminosity of the image, and is basically a compensation factor. The image similarity is given by the sum of the weights for the most significant wavelet features, minus a component based on the average luminosity.



