Adaptive Consistency

Team: Martynas Šeškas, Chao-Yuan Liang​, David Kloeg, Martin Petrov, Ruxandra Chiujdea​, Sofia Stylianou​
Type: Workshop
Year: 2018

In this workshop we explored “generative pattern making” as a quick way of creating varied pixel distributions / images as input for a (hacked) knitting machine. The knitting machine can either read punch cards (potentially made with a laser cutter or by pen and paper) or receive the information directly via an AYAB shield (based on an Arduino board). The machine reads “0” and “1” (or black and white) either to switch between different yarns or to create specific knit/hole distributions.
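As a minimal sketch of that input format (names and threshold are our own, not part of the workshop toolchain), an image can be reduced to the 0/1 rows a punch card or AYAB-driven machine consumes:

```python
# Hypothetical sketch: threshold a grayscale pixel grid into the 0/1
# rows a punch card or AYAB-driven knitting machine expects.

def to_knit_rows(pixels, threshold=128):
    """Map each pixel to 1 (knit) if darker than threshold, else 0 (hole).

    `pixels` is a row-major list of rows of 0-255 grayscale values,
    matching how image data is stored: row by row, left to right.
    """
    return [[1 if v < threshold else 0 for v in row] for row in pixels]

# A tiny 2x4 "image": dark pixels become 1, light pixels 0.
image = [
    [0, 200, 90, 255],
    [255, 30, 140, 10],
]
pattern = to_knit_rows(image)
# pattern == [[1, 0, 1, 0], [0, 1, 0, 1]]
```

Each row of the result corresponds to one carriage pass, with 1/0 selecting between yarns or between knit and hole.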

One way of generating images for knit patterns, or as seed images for a cellular automaton, is to make use of the way pixel information is stored in images (or in meshes in Grasshopper): pixels (or the colors at mesh vertices) are saved row by row, from left to right, with three color values 0-255 (RGB). Through (iterative) pixel-channel shifting, or remapping of pixel color values based on their position in the grid, it is possible to create non-random global patterns for each color channel.
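The idea above can be sketched as follows; the specific per-channel offsets (a stride of 8 per step in x, y, and x+y) are illustrative assumptions, not the exact remapping used in the workshop:

```python
# Hypothetical sketch of position-based channel remapping: each channel
# of every pixel is offset by an amount derived from its (x, y) grid
# position, producing deterministic, non-random global patterns per channel.

def remap_channels(pixels, width):
    """Remap RGB values in a flat, row-major pixel list.

    Each channel gets its own position-dependent shift (mod 256), so the
    three channels drift apart into distinct global patterns.
    """
    out = []
    for i, (r, g, b) in enumerate(pixels):
        x, y = i % width, i // width
        out.append((
            (r + x * 8) % 256,        # red drifts along each row
            (g + y * 8) % 256,        # green drifts down the columns
            (b + (x + y) * 8) % 256,  # blue drifts along the diagonal
        ))
    return out

# A flat 3x2 grid of mid-gray pixels becomes three distinct gradients.
flat = [(128, 128, 128)] * 6
shifted = remap_channels(flat, width=3)
# shifted[0] == (128, 128, 128); shifted[4] == (136, 136, 144)
```

Applying such a remap iteratively, or thresholding one channel of the result, yields the kind of structured-but-varied distributions that make useful knit patterns or cellular-automaton seeds.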

After the patterns were knitted, we analysed them using sunlight simulation (with the help of an ABB robot arm) in order to find the relation between a pattern and its shadow, and thereby gain a controlled way of translating between the two using different yarns and combinations of them. The light analysis was then taken further: using machine learning algorithms, we tried to predict the shadow of different knits, avoiding individual analysis for each piece. The experiment remained exploratory rather than becoming a usable tool, mostly due to the small number of images available to train the ML model.