HSV wheel color picker

via http://chiralcode.com/color-picker-for-android/

Single color picker

The color picker is based on the HSV color model. Using the color wheel in the middle, the user can adjust hue and saturation. The arc on the right side allows the user to change the value of the selected color. The arc on the left side shows the currently selected color. This is how it looks:

(Screenshots: Android Color Picker)

Multi color picker

A more advanced version allows picking several colors at once. It is not easy to compose a palette of eye-catching colors. Changing only the hue while keeping saturation and value at the same level gives nice effects and is easy to achieve.
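A minimal illustration of that palette idea (my own sketch in Python, not code from the color picker project): vary only the hue with the standard colorsys module while holding saturation and value fixed.

```python
import colorsys

def hue_palette(n=6, saturation=0.8, value=0.9):
    """Spread n colors evenly around the hue wheel while keeping
    saturation and value fixed, as described above."""
    palette = []
    for i in range(n):
        r, g, b = colorsys.hsv_to_rgb(i / n, saturation, value)
        palette.append('#%02x%02x%02x' % (round(r * 255), round(g * 255), round(b * 255)))
    return palette

print(hue_palette())  # six evenly spaced hues at the same saturation/value
```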

(Screenshots: Android Color Picker, multi-color mode)

Project is available at GitHub: https://github.com/chiralcode/Android-Color-Picker/

 

UberColorPicker Demo

(Screenshots: UberColorPicker Demo)

Superdry Color Picker

(Screenshots: Superdry Color Picker Demo)

https://github.com/superdry/


【paper】Detecting and Sketching the Common

 

Given a few images containing a common object, detect the common object and provide a simple and compact visual representation of that object, depicted by a binary sketch.

How does Shape Context use datasets for testing?

1. Original SC+TPS

1-NN classifier with shape context dissimilarity as the distance measure:

  1. estimate an affine transform between the query shape and the prototype
  2. apply the affine transform and recompute the shape contexts for the transformed point set
  3. score the match by summing the shape context distances between each point on one shape and its most similar point on the other shape (a minimal scoring sketch follows this list)
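A minimal sketch of that scoring step (my own illustration, not the authors' code), assuming the shape contexts have already been computed as normalized histograms and using the chi-square histogram distance:

```python
import numpy as np

def chi2(h1, h2, eps=1e-10):
    """Chi-square distance between two normalized shape context histograms."""
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def sc_match_score(sc_a, sc_b):
    """Sum, for every point on one shape, the distance to its most similar
    point on the other shape (symmetrized over both shapes)."""
    cost = np.array([[chi2(a, b) for b in sc_b] for a in sc_a])
    return cost.min(axis=1).sum() + cost.min(axis=0).sum()
```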

Digit recognition

    • MNIST hand-written digit dataset: 60,000 training and 10,000 test digits, n = 100 (Canny edge points), nd = 5, no = 12

3D object recognition

    • COIL-20 dataset: 72 views of each of 20 common household objects

MPEG-7 shape silhouette database –

    • CE (core experiment) Shape-1 Part B: 1400 images = 70 shape classes × 20 images each

 Trademark Retrieval

    • Trademarks are often best described visually by their shape information
    • the Vienna code broadly categorizes trademarks; here they are manually classified by perceptual similarity
    • 300 trademarks, n = 300, 8 query trademarks, top 4 hits

2. 

3. Inner distance shape context [pdf]

optional invariance

    • n sampled landmark points – a larger n produces greater accuracy at the cost of efficiency
    • size of the histogram (a minimal binning sketch follows this list):

– nd: number of inner-distance bins = 5; sometimes nd = 8 gives better results

– no: number of inner-angle bins = 12

– nr: number of relative-orientation bins = 8

    • number of different starting points for alignment, k, used in the dynamic programming
      • a larger k can improve the performance further
    • penalty for one occlusion, t; values in [0.25, 0.5] do not affect the results much
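A minimal sketch of how one landmark point's nd × no descriptor might be binned, assuming its inner distances and inner angles to the other sampled points are already computed (the inputs and bin ranges here are my assumptions, not taken from the paper):

```python
import numpy as np

def idsc_histogram(inner_dists, inner_angles, nd=5, no=12):
    """Bin one point's inner distances (log-spaced, nd bins) and inner
    angles (no bins over [0, 2*pi)) into an nd x no descriptor."""
    d = np.asarray(inner_dists, dtype=float)                 # distances to the other points
    a = np.asarray(inner_angles, dtype=float) % (2 * np.pi)
    d_edges = np.mean(d) * np.logspace(-1.0, 1.0, nd + 1)    # log-spaced, scale-normalized
    a_edges = np.linspace(0.0, 2 * np.pi, no + 1)
    hist, _, _ = np.histogram2d(d, a, bins=[d_edges, a_edges])
    return hist / max(hist.sum(), 1.0)                        # normalized descriptor
```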

Test Datasets

    • an articulated shape dataset: 40 images from 8 different objects, n = 200, nd = 5, no = 12, k = 1


Each object has 5 images articulated to different degrees

    • MPEG-7 CE-Shape-1: 1400 silhouette images from 70 classes, n = 100, nd = 8, no = 12, k = 8

Typical shape images from MPEG-7 CE-Shape-1, one image from each class.

The complexity of these shapes is mainly due to part structures rather than articulation.

    • Kimia silhouettes: 
      • (a) 25 images from 6 classes, n = 100, nd = 5, no = 12, k = 4
      • (b) 99 images from 9 classes, n = 300, nd = 8, no = 12, k = 4


    • ETH-80 dataset: 80 objects from 8 classes, 41 views of each object, 3280 images in total; n = 128, nd = 8, no = 12, k = 16

Leave-one-object-out cross-validation is used for testing: each image is compared to all images from the other 79 objects.

    • Swedish leaf dataset: 15 species with 75 leaves per species, n = 128, nd = 8, no = 12, nr = 8, k = 1

Each species contains 25 training samples and 50 testing samples; recognition uses the 1-nearest-neighbor result.

    • Smithsonian leaf dataset: 343 leaves from 93 species, nd = 5, nr = 8, no = 12, no dynamic programming used

One typical image from each species is shown.

187 of them are used as the training set and 156 as the testing set

    • human motion silhouette dataset: human body matching

4. SC + Chamfer matching in cluttered scenes

5. Partial shape matching [pdf]

 ETHZ Shape Classes dataset v1.2

INRIA horses dataset v1.03

6. Angular Partitioning Sketch-Based Image Matching [pdf] (2005)

  • ART-PHOTO BANK:
    • Model: 4000 full-color heterogeneous images of various sizes (500 groups of 8), a balanced combination of:
      • 250 art works, gained from the World Art Kiosk, California State University
      • 250 real natural photographs from set S3 of the MPEG-7 database
      • each group contains 8 similar images created by rotation in steps of 45°
    • Query: 400 sketches (100 groups of 4):
      • hand-drawn black-and-white sketches similar to 100 arbitrary candidates from the model, plus their rotated versions (90°, 180°, and 270°)

      • scanned at 200 dpi resolution
      • each input query has eight similar images in the database; in the best case there are eight non-similar images in the retrieval list

7. Painting

  • Elastic Matching of User Sketches (1997) [pdf]
    • 100 test images: 22 Morandi paintings, 10 sacred pictures, and sample pictures of diverse objects with dissimilar shapes

  • CBIR by shape matching (2006) [pdf], VRIMA [pdf]
    • 20 Morandi’s bottle paintings
    • each database image is stored as a collection of object shapes
    • each shape is represented by a list of 30 vertices
    • qualitative:
      • 25 sampled object images
      • 3 templates
      • 15 people classify the 25 paintings by their similarity to each of the 3 templates
    • quantitative: precision and recall
      • 5 queries:
        • oblong shape
        • squared shape
        • round bottle shape
        • squared bottle shape
        • irregular shape
  • Shape Similarity with Perceptual Distance and Effective Indexing (2000) [pdf]
      • test database: 1637 shapes of objects extracted from 20th century paintings.

      • Each shape has been sampled at 100 equally spaced points

      • M-tree indexing structure
      • qualitative effectiveness: measures to what extent the system agrees with human perception
        • 22 sample images representing bottles

        • 3 reference bottle sketches
        • 42 people: for each sketch, assign a score in [0, 1] to its retrieved images
    • quantitative: precision and recall
    • occlusion test: three reference bottle sketches

Shape-sketch

Leafsnap 


scene sketch – color drawings

A hand drawing reflecting an image scene in color; the sketch is more of a cartoon-like drawing.

Retrievr

Extracts a multi-resolution wavelet fingerprint of the complete image, comparing color and shape. Keeping 20 compression coefficients allows efficient retrieval.

shape sketch – line drawings

This kind of hand drawing provides the most natural extension of a language-based, word-level query: just a rough outline of an object, focusing entirely on shape.

  • A sketch is a thin line drawing focusing on shape, with no appearance information; it may contain one to many line strokes, which define a valuable stroke-point ordering.
  • A contour is a connected sequence of points, which may come from the input sketch.
  • A contour fragment is a connected subset of a contour: an ordered list of points.
  • A chord is a line joining two points of a region boundary; some chords can be used as shape descriptors:
    • histogram of the distribution of chord lengths and angles (a minimal sketch follows this list)
    • relative orientations between specifically chosen chords
    • used in geometric hashing
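A minimal sketch (my own illustration, not from the text above) of a chord length/orientation histogram over a set of boundary points:

```python
import numpy as np

def chord_histogram(boundary, n_len_bins=8, n_ang_bins=12):
    """Histogram of the lengths and orientations of all chords joining
    pairs of boundary points (boundary: (N, 2) array of x, y)."""
    pts = np.asarray(boundary, dtype=float)
    diffs = pts[:, None, :] - pts[None, :, :]            # all pairwise vectors
    iu = np.triu_indices(len(pts), k=1)                  # count each chord once
    vx, vy = diffs[..., 0][iu], diffs[..., 1][iu]
    lengths = np.hypot(vx, vy)
    angles = np.arctan2(vy, vx) % np.pi                  # orientation, not direction
    hist, _, _ = np.histogram2d(lengths / lengths.max(), angles,
                                bins=[n_len_bins, n_ang_bins],
                                range=[[0, 1], [0, np.pi]])
    return hist / hist.sum()
```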

User strokes and image edges may be over-fragmented and broken into multiple contours, hence partial matching, e.g. PArtial Contour and Efficient Matching (PACEM).

Goal:

efficient image retrieval in large databases by modeling the object of interest using a sketch of the object's shape.

Core idea:

a bag of local fragment codewords: fragment prototypes obtained from edges in the image database.

Use these fragment shapes (codewords) to describe both the sketch query and the images.

Painting as query
It is limited by perceptual error in both shape and color, as well as by the artistic prowess and patience of the user.

(Figure: painted query, scanned image, target image)

Wavelet Transform

Similar to the Fourier transform, but encodes both frequency and spatial information.

Most used in the area of image compression: by saving the few largest wavelet coefficients for an image and throwing away all of the smaller coefficients, it is possible to recover a fairly accurate representation of the image.

(Figure: reconstructions from 20, 100, and 400 wavelet coefficients vs. the original 16,000 coefficients)

By collecting 20 coefficients for each color channel, we distill a small signature for each of these images and save these signatures.

via Fast Multiresolution Image Querying – please see the paper.
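A minimal sketch of the signature idea under simplifying assumptions (a square channel whose side is divisible by 2**levels and a plain Haar averaging transform; the paper's actual transform and preprocessing differ):

```python
import numpy as np

def haar2d(img, levels=4):
    """Tiny 2-D Haar wavelet transform (averages/differences), recursing
    on the low-frequency quadrant at each level."""
    out = img.astype(float).copy()
    n = out.shape[0]
    for _ in range(levels):
        a = out[:n, :n]
        avg, dif = (a[:, 0::2] + a[:, 1::2]) / 2.0, (a[:, 0::2] - a[:, 1::2]) / 2.0   # rows
        a[:, :n // 2], a[:, n // 2:n] = avg, dif
        avg, dif = (a[0::2, :] + a[1::2, :]) / 2.0, (a[0::2, :] - a[1::2, :]) / 2.0   # columns
        a[:n // 2, :], a[n // 2:n, :] = avg, dif
        n //= 2
    return out

def signature(channel, m=20):
    """Keep only the positions and signs of the m largest-magnitude wavelet
    coefficients of one color channel: the 'small signature' described above."""
    coeffs = haar2d(channel).ravel()
    idx = np.argsort(np.abs(coeffs))[-m:]
    return {int(i): int(np.sign(coeffs[i])) for i in idx}
```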
retrieval by shape-sketch

 

 

 

 

 

After extracting shape context descriptors

Shape context descriptors are then indexed by a locality-sensitive hashing data structure, aiming to perform approximate k-NN search in high-dimensional spaces in sub-linear time.

>>>Efficient Logo Retrieval Through Hashing Shape Context Descriptors.
M. Rusiñol and J. Lladós. In Proceedings of the Ninth IAPR International Workshop on Document Analysis Systems, DAS10, pages 215-222, 2010. [pdf]
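A minimal sketch of the indexing idea using random-hyperplane LSH (my own simplification; the paper may use a different LSH family and parameters):

```python
import numpy as np
from collections import defaultdict

class HyperplaneLSH:
    """Hash descriptor vectors into buckets so that approximate nearest
    neighbors are found by probing one bucket per table, in sub-linear time."""
    def __init__(self, dim, n_tables=8, n_bits=12, seed=0):
        rng = np.random.default_rng(seed)
        self.planes = [rng.standard_normal((n_bits, dim)) for _ in range(n_tables)]
        self.tables = [defaultdict(list) for _ in range(n_tables)]

    def _key(self, planes, x):
        return tuple((planes @ x) > 0)            # sign pattern = bucket key

    def add(self, x, label):
        for planes, table in zip(self.planes, self.tables):
            table[self._key(planes, x)].append((x, label))

    def query(self, x, k=5):
        candidates = []
        for planes, table in zip(self.planes, self.tables):
            candidates.extend(table[self._key(planes, x)])
        candidates.sort(key=lambda c: np.linalg.norm(c[0] - x))  # exact re-ranking of the few candidates
        return [label for _, label in candidates[:k]]
```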

CSC – color shape context

For each sampled point pi, a shape context histogram Si is computed. To add color information to SC:

  • A circular mask is defined as the region of interest centered at pi; the size of the mask is computed relative to the mean distance between all point pairs in the shape.
  • A local color name histogram Ci is computed within this mask.
  • Combination of both descriptors Si and Ci at each point of the shape:
    • distance between two points d(pi, pj): multiply the distance of the shape context descriptors by that of the local color descriptors
    • matching between sketch and image: given the set of local distances d(pi, pj) between all pairs of points, the final distance between the sketch query and the image is determined by minimizing the total cost of the matching

A bipartite graph matching approach puts in correspondence points having similar shape and color descriptions.

  • Once all the n points in a shape are described by their shape context histograms, matching two shapes means finding the point correspondences.
  • The simplest way to compute the matching between the two sets of points is a bipartite graph matching approach (a minimal matching sketch follows this list).
  • To obtain a more robust matching, the most usual techniques involve the computation of an affine transform that maps the set of points from one shape onto the other.
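A minimal sketch of the combined distance and the bipartite matching, assuming the per-point shape context and color name histograms are already computed; SciPy's Hungarian solver stands in for the matching step:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def chi2(h1, h2, eps=1e-10):
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def csc_distance(sc_q, col_q, sc_i, col_i):
    """d(pi, pj) multiplies the shape-context and local-color distances; the
    sketch-to-image distance is the cost of the optimal bipartite matching."""
    cost = np.empty((len(sc_q), len(sc_i)))
    for a in range(len(sc_q)):
        for b in range(len(sc_i)):
            cost[a, b] = chi2(sc_q[a], sc_i[b]) * chi2(col_q[a], col_i[b])
    rows, cols = linear_sum_assignment(cost)      # Hungarian algorithm
    return cost[rows, cols].sum()
```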

>>>Perceptual Image Retrieval by Adding Color Information to the Shape Context Descriptor.
M. Rusiñol, F. Nourbakhsh, D. Karatzas, E. Valveny and J. Lladós. In Proceedings of the 20th International Conference on Pattern Recognition, IEEE Press, pp. 1594-1597, Istanbul, Turkey, 2010. [pdf]

Shapeme Histogram

The shapeme histogram descriptor was proposed by Mori et al. []

This descriptor was inspired by the bag-of-words model. The main idea is to perform vector quantization in the space of the shape contexts of all interest points.

  • clustering stage of the shape context feature space: k-means algorithm
  • each shape context descriptor can then be identified by the index of the cluster it belongs to
  • Detection (a minimal sketch follows this list):
    • extract n sampled points from the edge map, then compute the shape context hi of each
    • each point's shape context descriptor is projected into the clustered space and identified by a single index Ii
    • the query image can then be represented by a histogram coding the frequency of appearance of each of the k shapeme indices
    • to find matches, find the k-NN in the space of shapeme histograms
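A minimal sketch of the quantization and histogram steps, assuming the shape contexts are already extracted; scikit-learn's KMeans stands in for the clustering stage:

```python
import numpy as np
from sklearn.cluster import KMeans

def build_shapeme_codebook(all_shape_contexts, k=100, seed=0):
    """Cluster the pooled shape context descriptors into k shapemes."""
    return KMeans(n_clusters=k, n_init=10, random_state=seed).fit(np.vstack(all_shape_contexts))

def shapeme_histogram(shape_contexts, codebook):
    """Represent one image by the frequency histogram of its shapeme indices."""
    idx = codebook.predict(np.asarray(shape_contexts))
    hist = np.bincount(idx, minlength=codebook.n_clusters).astype(float)
    return hist / hist.sum()

# Matching then reduces to k-NN search in the space of these histograms.
```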

 

 

 

 

 

 

 

 

inner-distance for shape matching

inner-distance

The inner-distance is the length of the shortest path between landmark points within the shape boundary; it is used to build shape descriptors.

  • insensitive to articulation
  • sensitive to part structures, a desirable property for complex shape comparison

Euclidean distance

does not consider whether the line segment crosses shape boundaries.

  • the inner-distance replaces the Euclidean distance to extend the shape context (a minimal shortest-path sketch follows)
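A minimal sketch of the inner-distance computation (my own illustration of the same shortest-path idea), assuming a binary shape mask and a set of sampled boundary points given as (row, col) pairs:

```python
import numpy as np
from itertools import combinations

def segment_inside(mask, p, q, steps=50):
    """True if the straight segment p -> q stays inside the binary shape mask."""
    for t in np.linspace(0.0, 1.0, steps):
        r = int(round(p[0] + t * (q[0] - p[0])))
        c = int(round(p[1] + t * (q[1] - p[1])))
        if not mask[r, c]:
            return False
    return True

def inner_distances(mask, points):
    """All-pairs inner-distance: Euclidean edges only between mutually visible
    points, followed by all-pairs shortest paths (Floyd-Warshall)."""
    n = len(points)
    D = np.full((n, n), np.inf)
    np.fill_diagonal(D, 0.0)
    for i, j in combinations(range(n), 2):
        if segment_inside(mask, points[i], points[j]):
            D[i, j] = D[j, i] = np.hypot(points[i][0] - points[j][0],
                                         points[i][1] - points[j][1])
    for k in range(n):                            # Floyd-Warshall relaxation
        D = np.minimum(D, D[:, [k]] + D[[k], :])
    return D
```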

object recognition – combine shape and texture

leaves from different species often share similar shapes but have different vein structures

The gradient information along the shortest path is used to capture this texture.

Shape Context

1) Given the points on two shapes A and B, first the point correspondences are found through a weighted bipartite matching.

2) Then, thin-plate spline (TPS) interpolation is used iteratively to estimate the transformation between them.

3) After that, the similarity D between A and B is measured as a weighted combination of three parts:
D = aDac + Dsc + bDbe

    • Dac measures the appearance difference
    • Dbe measures the bending energy
    • Dsc shape context distance, measures the average distance between a point on A and its most similar counterpart on B
    • a, b are weights (a = 1.6, b = 0.3)

Inner-Distance Shape Context (IDSC)

【Color+Shape】 rather than Shape

Shape in an image refers to the shape of the regions in an image.

Shape is a well-defined concept used for computing the similarity between images, rather than texture and color. Shape deals with the spatial information of an image.

[Biederman, 1987] showed that natural objects can be recognized by their shape. The shapes of all the objects in an image are computed to identify the objects in the image.

  • Shapes within the image can be represented in terms of curves, lines, eigen-shapes, points, medial axes, etc.
  • Shapes can be expressed in terms of various descriptors like moments, Fourier descriptors, geometric and algebraic invariants, polygons, polynomials, splines, strings, deformable templates, and skeletons

http://sgo-feeds-dev.blogspot.co.uk/2012/11/shape-context-rotation-invariance.html

shape context algorithm

I was trying to achieve rotation invariance for shape Context.
The general approach for shape context is
  • to compute distances and angles between each pair of interest points in a given image.
  • Then bin into a histogram based on whether these calculated values fall into certain ranges.
You do this for both a standard and a test image.
  • To match two different images, from this you use a chi-square function to estimate a “cost” between each possible pair of points in the two different histograms.
  • Finally, use an optimization technique such as the Hungarian algorithm to find optimal assignments of points, then sum up the total cost, which will be lower for good matches.
They say that to make the above approach rotation invariant, you need to calculate each angle between each pair of points using the tangent vector as the x-axis (i.e. http://www.cs.berkeley.edu/~malik/papers/BMP-shape.pdf, page 513).
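A minimal end-to-end sketch of that pipeline (my own simplification): log-polar histograms per point, a chi-square cost matrix, and the Hungarian assignment. The tangent-frame trick for rotation invariance described above is omitted here.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def shape_contexts(points, n_r=5, n_theta=12):
    """Per-point log-polar histogram of the relative positions of all
    other points (points: (N, 2) array of x, y)."""
    pts = np.asarray(points, dtype=float)
    d = pts[None, :, :] - pts[:, None, :]                    # vectors point i -> point j
    r = np.hypot(d[..., 0], d[..., 1])
    theta = np.arctan2(d[..., 1], d[..., 0]) % (2 * np.pi)   # absolute frame (not rotation invariant)
    r_norm = r / r[r > 0].mean()                             # scale invariance
    r_edges = np.logspace(np.log10(0.125), np.log10(2.0), n_r + 1)
    t_edges = np.linspace(0.0, 2 * np.pi, n_theta + 1)
    H = np.zeros((len(pts), n_r, n_theta))
    for i in range(len(pts)):
        keep = np.arange(len(pts)) != i
        h, _, _ = np.histogram2d(r_norm[i, keep], theta[i, keep], bins=[r_edges, t_edges])
        H[i] = h / max(h.sum(), 1)
    return H.reshape(len(pts), -1)

def match_cost(points_a, points_b):
    """Chi-square cost between every point pair, then the optimal assignment."""
    A, B = shape_contexts(points_a), shape_contexts(points_b)
    cost = 0.5 * ((A[:, None, :] - B[None, :, :]) ** 2 /
                  (A[:, None, :] + B[None, :, :] + 1e-10)).sum(-1)
    rows, cols = linear_sum_assignment(cost)                 # Hungarian algorithm
    return cost[rows, cols].sum()
```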

Learning grasping points with shape context

Color-shape context for object recognition

Advanced shape context for plant species identification using leaf image retrieval

【ACTIVE BASIS】the model learns to sketch

The main introduction is maintained by Ying Nian Wu.

  • How to represent a deformable template? 

The answer is not much beyond a wavelet representation: just add some perturbations to the wavelet elements.
The perturbations and the coefficients of the wavelet elements are the hidden variables that seek to explain each individual training image.

Active basis.

Each Gabor wavelet element is illustrated by a thin ellipsoid at a certain location and orientation.

The upper half shows the perturbation of one basis element. By shifting its location or orientation or both within a limited range, the basis element (illustrated by a black ellipsoid) can change to other Gabor wavelet elements (illustrated by the blue ellipsoids). Because of the perturbations of the basis elements, the active basis represents a deformable template.
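A minimal sketch of one Gabor basis element and its allowed perturbations (my own illustration of the idea; the sizes, ranges, and step counts are arbitrary):

```python
import numpy as np

def gabor_element(size=17, wavelength=6.0, orientation=0.0, sigma=3.0):
    """One (odd-symmetric) Gabor wavelet element centered in a small patch."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(orientation) + y * np.sin(orientation)
    yr = -x * np.sin(orientation) + y * np.cos(orientation)
    g = np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2)) * np.sin(2 * np.pi * xr / wavelength)
    return g / np.linalg.norm(g)

def perturbed_versions(x0, y0, orient0, max_shift=2, max_rot=np.pi / 12):
    """All allowed displaced/rotated versions of one basis element: shifts
    along its normal direction and small orientation changes."""
    versions = []
    for s in range(-max_shift, max_shift + 1):
        for r in (-max_rot, 0.0, max_rot):
            versions.append((x0 + s * np.cos(orient0 + np.pi / 2),
                             y0 + s * np.sin(orient0 + np.pi / 2),
                             orient0 + r))
    return versions
```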

The model is generative and seeks to explain the images.

  • How to learn a deformable template from a set of training images?

The answer is not much beyond matching pursuit: just do it simultaneously on all the training images.
The intuitive concepts of “edges” and “sketch” emerge from this process.

    • learning by shared sketch algorithm

REFERENCE

  • Learning Active Basis Model for Object Detection and Recognition [pdf]
  • Wavelet, Active Basis, and Shape Script — A Tour in the Sparse Land 2010 [pdf]
  • Learning Active Basis Models by EM-Type Algorithms [pdf]

 
