openssl enc -aes-256-ctr -pass pass:"$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64)" -nosalt < /dev/zero > randomfile.bin

.. suddenly the limit is not the CPU anymore, but the bus to the drive carrying the filesystem :)

and when one wants to see some numbers, add pv in between:

openssl enc -aes-256-ctr -pass pass:"$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64)" -nosalt < /dev/zero | pv -pterb > /dev/sdb
Posted on 23 September, 2015 Tags:

the jungle of replacing a failing hard drive in a macbook pro and not wanting yosemite afterwards!

Why? well, after fsck'ing the machine without errors but still having an unbearably slow user experience - the spinning ball takes ~15 minutes until a click on the username in the login manager reacts, there is no chance to use the system after login, and even tab expansion in single user mode takes ages - I finally decided that either the hdd is failing, a cable is faulty or the motherboard is dying. Or, before checking that, let's reinstall Mavericks to make sure it is not some strange config issue in the formerly installed OS.

Why not yosemite? well, default submission of spotlight search requests to Apple and Microsoft? Sure, one can disable that, but what other hidden anti-privacy "features" are there? I do not trust it, yet. So Mavericks it shall be!

Uhm, where do I get the installer? Ah yes, Appstore. Wait, no, only if you previously downloaded it with the corresponding AppleID. I didn't. Bummer. So no Mavericks anymore for me?

Ok, Mavericks is free of charge so it might not be illegal to just download it somewhere else. Sure, I go and download a Mavericks image which I can put directly onto a USB stick to then install on the machine.

Good. Install runs, takes ages. Actually never finishes m( Alrite, check! Replacing the hard drive is what we do now!

Open the thing up, easy: just some screws. Ok, the torx for the HDD screws is annoying, but no issue. Let's install that Mavericks again! Takes only 30-40 minutes and is done. Yeah \o/

No, I cannot activate FileVault? Research suggests I should never ever use the case-sensitive journaled HFS variant .. ok, redo the thing with the other HFS. Another hour, done, fine.

F***. No FileVault again? Missing recovery partition maybe? Ok, let's get that "Recovery Partition Creator" which is recommended for that case. Run it. Ok, it takes ages, does not respond, is obviously doing nothing. Grrr .. Tried to run it from the single user mode console, no chance. That thing seems broken to me.

WTF!

Ok, once Mavericks is installed, the interwebbs suggest to just reinstall it from the downloaded installer again - to get the recovery partition (re-)created. Over itself. Well, ok. Let's click that installer I have on the stick!

I see, it wants to connect to the AppStore to verify the machine with Apple .. wait whut? Ah, nevermind, let's just get over this!

On a fresh OSX you need an AppleID to use the AppStore. Ok, do it. Wait, it needs a credit card? Are you nuts? Why? Okok, after another hour of researching I find a way to create an AppleID without credit card credentials (by downloading a "free" app in the AppStore and creating an AppleID from that dialog there; then one gets offered the "None" button for the payment method ..) .. are you serious?!

Kewl, I am now in the AppStore after clicking my installer .. and .. yeah. "The product you are looking for is not available at the moment." Nice, I created 3 AppleIDs and wasted numerous email aliases just to learn that I cannot install Mavericks, again?

Alrite. Maybe that installer I downloaded somewhere is faulty. I try to get an original one. That one should hopefully create the recovery partition upon install (remember: to get FileVault running!!)! I ask around, look through all Macs I have around and - voila - I actually do find an already downloaded Mavericks installer of ~6GB size.

I put that on the stick with:

sudo /Applications/Install\ OS\ X\ Mavericks.app/Contents/Resources/createinstallmedia \
  --volume /Volumes/MyStick \
  --applicationpath /Applications/Install\ OS\ X\ Mavericks.app \
  --nointeraction

Takes an hour, I can boot it. I install it. It says "8 minutes to go", not very long, uh? Reboots. Lands in the installer - again, says "50 minutes to go". Okaaaayyyyyyyyy .. at least it does something.

End of journey: it installed. The formerly created users are still there. So what did it actually do? Not a complete fresh install for sure. Nevermind. Let's see system preferences: activate FileVault .. aaaand ?!

it actually does it. reboots. tells me it needs another 6 hours until done with the encryption! wow! only 2 days wasted! to re-install Mavericks with a usable recovery partition to actually use FileVault!

thanks for listening. dafuq!?

slightly related, did you dry these in the rain forest?!?: http://www.youtube.com/watch?v=Sv5iEK-IEzw

Posted on 26 November, 2014 Tags:

as root:

curl -sSL https://get.docker.io/ | sudo sh
usermod -aG docker youruser

done!
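
a quick sanity check, assuming network access to the docker registry (note: the new group membership for youruser only applies after logging in again):

docker run hello-world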

do not depend on distribution packages .. ;_)

Posted on 4 September, 2014 Tags:

to get your own CA (easily created with tinyca) into your debian/ubuntu system as an accepted CA do this:

actually it seems the update-ca-certificates script only picks up files named *.crt .. so place your .pem there, renamed to *.crt .. :D

:# cp your-ca.crt /usr/local/share/ca-certificates
:# update-ca-certificates

voila, it is now imported :)

you can check what the above script actually did with:

:# ls -al /etc/ssl/certs/ | grep your-ca
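
and to check that a certificate signed by that CA now validates against the system store (your-server.pem being a hypothetical certificate to test with):

:# openssl verify -CApath /etc/ssl/certs your-server.pem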

test driving systemd on jessie

recently, i saw updates to udev were failing on a jessie vm. the reason was that the vm was running on an older kernel that for other reasons could not be upgraded. as such, /dev did not contain a lot of entries which successfully prevented the vm from booting. manually mounting/chrooting its root file system in the dom0 let me uninstall systemd-* and get back to sysvinit-core. for lack of /dev entries i had to resort to installing makedev which got the box running again.
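
roughly, the rescue dance from the dom0 looked like this (the LV name is made up, adjust to your setup):

mount /dev/vg0/jessie-disk /mnt          # hypothetical LV holding the domU root
mount --bind /dev /mnt/dev
mount --bind /proc /mnt/proc
chroot /mnt
apt-get install sysvinit-core makedev    # pulls systemd-sysv back out
cd /dev && MAKEDEV generic               # repopulate /dev the old-fashioned way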

and yes, i probably could have read https://wiki.debian.org/Xen#Error_.22unknown_compression_format.22 and run https://github.com/torvalds/linux/blob/master/scripts/extract-vmlinux but upgrading the dom0 will fix it anyway.

it was time to also dist-upgrade a laptop that had initially been installed with squeeze, back then running on different hardware. now this laptop has multiarch running, with A LOT of installed packages (roughly 5000). many of the installed services were only installed for testing and disabled from starting up by renaming their links in /etc/rc[1-5].d from uppercase S10_foobar to lowercase s10_foobar, which kept on working when i dist-upgraded the box to wheezy.

after the dist-upgrade to jessie, i found ALL installed software to be running. not too much of a biggie, i set about disabling these services again using systemctl and got most of the stuff disabled. samba still refused to be turned off though. to be fair, some stuff that had been broken since the upgrade to wheezy suddenly started working again, ie plugging in usb sticks and mounting them via the gui worked, and slamming the lid of the laptop would actually make it suspend.
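
for the record, disabling a unit and, as a bigger hammer for the stubborn samba, masking it would look like this (unit names are just examples, not taken from the original session):

systemctl disable apache2
systemctl mask smbd nmbd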

having stuff running that was not supposed to be running is one thing. booting seemed a bit quicker, but since i am not running a display manager i still had to wait until all of the virtual consoles came up. what put me off though was that i was suddenly experiencing shutdown times as if i was running windows: shutdown took anywhere from about 30 seconds to several minutes. debugging was a bit difficult as the syslogd got stopped very early.

ok, this is probably very beta and as i am running testing, it probably is normal to encounter a few glitches. maybe it will get better some day.

ok again, how am i going to fix it ? let's revert to the old sysvinit.

oh.

doh.

sigh ok, fuck it, whatever.

deinstalling systemd means some of the gnome apps will have to be deleted. fortunately this box runs xfce. goodbye, aptdaemon brasero colord gconf-editor gnome-sushi gvfs gvfs-backends gvfs-daemons gvfs-fuse hplip nautilus nautilus-sendto packagekit packagekit-tools policykit-1 policykit-1-gnome printer-driver-postscript-hp udisks2, hoping to be able to install you some day again.

ah, the resolution of the 2nd monitor is not kept anymore and i have to manually set it each time X is started. hm. last time i got it fixed by briefly installing a display manager. lets try gdm3 or lightdm or so.

doh. both rely on systemd and i can't have them running with sysvinit-core. WTF ? the universal operating system is denying choices ?

maybe the project maintainers should not try so aggressively to impose their software onto users. now if i need a box that has to run xyz i am forced to have systemd on it ? this is getting ridiculous. The Depends- and Conflicts- fields of some packages seem deliberately fucked to give people no other choice than systemd.

this sucks.

to summarize,

systemd needs a new kernel and the system will not work with an older one. it does a lot and, as usual, development is quicker than the documentation being written. it also acts as a service monitor.

this concept does not look like unix, it looks like redmond. it is not small and beautiful, but a huge chunk of functionality with a lot of different things it is supposed to do.

being so intrusive as to make it mandatory if one wants to run ie gnome apps sucks bonkers.

this does not feel like adding an alternative, it feels more like having constraints stuffed down one's throat while deliberately removing alternative choices.

Update:

i now have the following in /etc/apt/preferences.d/no-systemd :

Package: systemd-sysv
Pin: release o=Debian
Pin-Priority: -1
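
to check that the pin is actually picked up:

apt-cache policy systemd-sysv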

as I fight all day with gpt, parted, debian-installer, dmraid,
partition types and mdadm, I just put some useful snippets here.
AFTER all this, use the debian-installer (or whichever distro's installer you use),
as those installers tend to mess up here big time, resulting in non-booting systems.

get rid of former fakeraid metadata on your drives:

dmraid -rE

create a useful layout on sda (these commands go into parted /dev/sda)

mklabel gpt
mkpart non-fs 1 2     # leave some space before in case one day a slightly smaller drive needs to sit in the raid :)
mkpart boot 2 1000
mkpart system 1000 -1 # leave some space after in case one day a slightly smaller drive needs to sit in the raid ..
set 1 bios_grub on
set 2 raid on
set 3 raid on

copy the gpt partition table from one drive to another.
WATCH OUT: it copies FROM /dev/sdX TO /dev/sdY .. do not mess this up ..

sgdisk -R=/dev/sdY /dev/sdX
sgdisk -G /dev/sdY # this randomizes the GUID on disk & partitions
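
with both disks partitioned, the raid itself would be built along these lines (raid level and partition numbers are just an example matching the layout above):

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdX2 /dev/sdY2   # boot
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdX3 /dev/sdY3   # system
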
Posted on 21 March, 2014 Tags:

sometimes one could go crazy; it took me ages to figure this out, that's why it is here now:

in case the installer doesn't see the freshly created fakeraid
(e.g. on a hp proliant microserver ..),
add this to your installer bootloader line:

dmraid=true

this can save a day ..

give a group read/write on everything below a path, plus execute and matching default ACLs on the directories:

setfacl -R -m group:groupname:rw /path/to/dir
find /path/to/dir -type d -exec setfacl -m group:groupname:rwx {} \;
find /path/to/dir -type d -exec setfacl -m default:group:groupname:rwx {} \;
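
to double-check what ended up on a directory:

getfacl /path/to/dir
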
Posted on 10 March, 2014 Tags:

one just creates the targetlv with the same size as the srclv (8g in this case),
and when telling pv this size for the transfer it shows all the bells and
whistles it can .. even an ETA :_)

dd if=/dev/srcvg/srclv | \
pv -p -e -t  -r -a -s 8G | \
ssh -i /home/user/.ssh/sshprivatekey -C -c arcfour -l root 172.x.y.z 'dd of=/dev/targetvg/targetlv'

After building two resonant lowpass gates, a design from old don buchla put onto a pcb by thomas white, i decided to make my own euro front panels for it and share them.


the files are available at github

(c) 1904-2038 wotwot

cc-by-nc-sa

hercules rmx weight reduction

somewhere i read about the hercules rmx having a piece of steel inside that adds to its total weight. removing it would make it easier to transport to gigs, i thought.

of course, i removed all screws to disassemble the enclosure which resulted in some parts bouncing around, so i really had to go on with my objective.

removing the steel plate was then quite simple. i even managed to reassemble it without any parts left over. to do it again, it makes sense to only remove some screws and leave at least four of them in place: the only screws on the top side that need to be removed are the six long screws that hold the top and bottom case together.

to just remove the piece of steel, it is sufficient to remove the bottom lid only which means removing all screws on the sides and the bottom and the aforementioned six on the left and right side from the top. removing the aluminium cover is trickier but not necessary.

hercules rmx top screws

Posted on 29 June, 2012 Tags:

think big is dead

big is the old old


even when thought locally.
bigger, stronger, louder, and longer might trigger lowlevel patterns but small is as underrated as analog descriptions of great size are overhyped. neither of them is more important than the other.
while it might still be desirable to maintain the illusion of an overview of reality,
rather than always thinking of something from the biggest possible point of view, it makes sense to look at chaos and remember that all models derived from an approximation of "it/nature/dog/.." are never sufficient to thoroughly describe a more complex system, and that any focusing on the next big "solution" (hey, you're free to choose your own reality after all) will often lead to a system reacting in a bolder way than necessary.

contrary to dawkins, freedom exists, but it often requires almost machine-like precision to go about one's way, knowing that exerting your freedom to leave your own path is seldom a good idea.

to repeat an older meme:

small is beautiful

interfering with systems to "improve them" is beginner's behaviour that at best resembles the dreams of the wizard's apprentice.
the cake is a lie.
the cake factory is a bigger lie.
abstractions do not exist. also, closed systems don't either.

.. or even ignore that it is xen; basically this command moves the content of a logical volume to another server, reducing bandwidth usage with gzip compression.

prerequisites:

  • create lvm on target machine
  • install "pv" on src machine

run

root@src:# dd if=/dev/vg/disk bs=4096 | pv | gzip -1 | ssh -p2222 targethost.org "gzip -dc | dd of=/dev/vg/disk"

on your target machine you may want to (a rough sketch follows the list):

  • resize2fs if the target lv is larger
  • fsck.extX /dev/vg/disk on the target machine
  • edit the domU.cfg according to your target machine
  • mount /dev/vg/disk /mnt && chroot /mnt, then edit network, hostname, hosts
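
the target-side steps from the list above, as a rough sketch (fs type and device names are assumptions, adjust to yours):

root@target:# fsck.ext4 -f /dev/vg/disk          # assuming ext4
root@target:# resize2fs /dev/vg/disk             # only if the target lv is larger
root@target:# mount /dev/vg/disk /mnt
root@target:# chroot /mnt
# then edit /etc/network/interfaces, /etc/hostname and /etc/hosts, and adjust the domU.cfg
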
Posted on 21 February, 2012 Tags:

electronics

some electronics documentation:

electronics-tutorials.ws

list of IC pinouts

edge detectors

circuit analysis

Posted on 1 February, 2012 Tags:

patchbay

a patchbay might seem a bit like ye olde telephone switchboard but can actually do some neat things. here's some cablemonkey stuff:

the thing looks like this:

basically, this box contains 2 rows of 24 TRS sockets each, on front and back, equalling 96 ways to sink your beloved cables.

there are some terms that are sometimes used differently: normalled, half-normalled, open, paralleled, split and isolated. a local copy of an explanation follows:

Patchbays (also mixbays) are very simple, once you understand their purpose. They let you easily change the way your recording studio is connected, and to easily restore your standard operating methods just by removing all of the plugs from the patchbay. This means that the patchbay must have some way of remembering what your standard operating methods are.

A standard patchbay is divided into a number of columns of pairs of jacks, each one containing one patch point. Usually a patch point consists of an output from one device and an input to another device. How they are connected depends on how you normally use your studio. With this in mind, there are four different ways patch points can be connected. Notice that the following diagrams show all combinations of jacks being inserted or removed from the front panel.

OPEN

The open configuration never makes a connection from the top jacks to the bottom jacks. Notice how the two circuits are always kept separate.

This is useful for connecting a normally unused effect to the patchbay. The bottom front panel jack becomes the send to the effect and the top jack becomes the return from the effect.

Examples: effect boxes, isolated tape machines

NORMALLED

The normalled configuration makes a connection from the top jacks to the bottom jacks whenever no plugs are inserted into either front panel jack. Notice how inserting a plug in either front panel jack breaks the connection between the top and bottom circuits.

This is useful for connecting a source that should not have more than one load, such as a dynamic mic. The mic comes into the back of the top jacks and the feed to the preamp is at the bottom. Inserting a plug in the top front jacks diverts the mic signal for use elsewhere, while preventing the mic from being loaded down. Inserting a plug into the bottom jack allows a different signal to feed the preamp.

By using both jacks, you can insert a mic-level effect between the mic and the preamp.

Examples: microphones, high impedance outputs

HALF-NORMALLED

The half-normalled configuration makes a connection from the top jacks to the bottom jacks whenever no plug is inserted into the bottom front panel jack. Notice how inserting a plug in the bottom front panel jack breaks the connection between the top and bottom circuits, but inserting a plug in the top front panel jack does not.

This is useful for connecting a normal signal flow from one piece of equipment to another, while allowing the connection to be tapped off of or replaced if needed. Inserting a plug in the top front jacks taps the signal for use elsewhere while letting the normal connection still pass signal. Inserting a plug into the bottom jack allows substituting a different signal while removing the normal signal flow.

By using both jacks, you can insert an effect into the signal path.

Examples: mixer to monitor amp, direct out to recorder in

PARALLEL

The parallel configuration always makes a connection from the top jacks to the bottom jacks. Notice how the two circuits are kept together, and that both front panel jacks are outputs.

This is useful for connecting an output, which is normally connected to one input, to several different inputs at once. Both jacks can send the signal to places where it is needed.

Examples: mixer submaster outputs, monitor feeds, tape duplication tap points

Note that balanced patchbays have a second set of connections on each patch point for the Ring terminal, which are wired identically to the connections for the Tip terminal that are shown in the diagram. But before the TRS plug was developed, paired plugs were made with one handle, so they fit into two adjacent patch points for balanced signals. Some of these are still around.

Patchbays are now available that have switches on each patch point, to select whether the patch point is Open, Normalled, Half-normalled, or Parallel. Usually the patchbay must be removed from the rack to change the switches.

USING THE PATCHBAY

For most studio patching, two setups are used most often:

The first setup is the normal audio chain. For this setup, the output of each component in an audio chain is brought to the rear input of one patch point. The input of the next component in the chain is connected to that patch point's rear output. The patch point is set up as Half-normalled. The normal connection is maintained whenever plugs are not inserted into front jacks of the patch pair.

Inserting a plug in the upper front panel jack allows you to split the signal off in two directions.

Inserting plugs in both front jacks allows you to insert another component in the chain.

By inserting a cable in the front output jack of one patch point, and the front input jack of the next patch point downstream, you can remove a component from the audio chain. You can then connect cables to the remaining jacks of those patch points and use the removed component somewhere else (nifty use!).

The second setup is the isolated component. Bring its output to the top jack on the rear, and its input to the bottom jack on the rear of the same patch point. Set the patch point up as Open. This component is disconnected until needed, but takes up only one patch point, rather than the two that would otherwise be used.

Patchbays can make your studio life easier, by keeping you from having to reach around behind racks to reconnect equipment frequently. They also make it super-easy to restore your most-often used configuration. All you do is pull all of the patchcords out of the front panel, and you are back to standard operation.


connections might be different in equipment, ymmv.

each top and bottom socket on it would be on a little PCB, with one of the sockets in gray and the rest in black. it can also be rotated and put back into the patchbay.

 



in theory, these connections can be plugged in with whatever.

in practice, very often an approach is taken that looks like this:

the top sockets are outputs like from a subgroup, fx, line or soundcard out,
the bottom ones are inputs like for a microphone or other sound sources.


whenever a plug is inserted into the gray socket, top and bottom row will be isolated.

the idea is to use the back side of the patch- or mixbay to connect the default connections. other setups can flexibly be patched, and just unplugging the front cables will reset the connections to default.

insert cables

can be connected to the back and left disconnected at the front until some fx needs plugging in or directly used as input.

what it is really about

the entire thing gets clearer looking at the electronic component that is responsible for the normalization: a normalising socket.


this mono socket has its tip normalised to the shunt, and this connection will be broken when a plug is inserted.

a patchbay like this however has sockets where tip, ring and sleeve all have normalising shunts:

normalising_socket_photo.png

Posted on 26 December, 2011 Tags:

fossil, a dscm

fossil is an open source distributed version control system with some features like

  • distributed bug tracking
  • distributed wiki
  • web interface
  • and moar

as a detailed overview cares to explain.

Posted on 25 December, 2011 Tags:

most annoying feature ever!

as with every increase of the ffox version number fewer and fewer plugins work,
and none of the ffox DNS cleaning extensions would install with my recent version 9.something :/

so, here comes the plain way to do it:

  • about:config
  • yes, I'll be careful
  • new -> integer
  • network.dnsCacheExpiration as name and 0 as value
  • new -> integer
  • network.dnsCacheEntries as name and 0 as integer value

*phew

apropos: did you know about:robots?

try it, LOL :_)

Posted on 21 December, 2011 Tags:

recent findings

streaming & radio tools

Posted on 27 November, 2011 Tags:

Assuming:

  • you have a running prosody.im XMPP server
  • you have the telnet module enabled
  • you added modules and/or changed configs
  • you don't want to restart the service and force-disconnect the connected users

Here is what you do:

telnet localhost 5582

now we want to load a module

module:load("muc_log", "conference.example.com")

| Loaded for conference.example.com
| OK: Module loaded onto 1 host

if loading it for the first time, just do

module:load("muc_log_http")

it gets trickier when you need to clean up for a module reload:

httpserver.new.http_servers[5290].handlers["muc_log"] = nil

now browse to http://conference.example.com:5290/muc_log/ (trailing slash needed!)

now: profit!

Posted on 2 August, 2011 Tags:

ardrum

Recently i started making my own drumset. as good e-drums are not quite affordable for me, i set out to build one myself with some design goals:

  • the individual drums should be sensitive to the place where they get hit
  • i wanted the set to be able to play samples as well as to trigger and influence sound synthesis
  • it should teach me about arduinos and other microcontrollers
  • the sensor input should be processed in an arduino and sent to pure data
  • it should be sort of affordable
  • standard parts should be used whenever possible

i came back from work where we checked my 20 yr old piezos with an oscilloscope: they have their first peak between 8.88 and 9 ms after being hit, so less than 10 ms latency should work. among the other drums, these piezos go into the snare, too.

parts

electronics

hid

practice cymbals, pads and other materials

software

when it's in a releasable shape, it will consist of
- arduino sketch
- pure data patches

and of course, schematics too.

noisefoc

yesterday we built a noisefoc, a little NAND-based square wave synthesizer.

noisefoc-mini

the sounds it makes can be heard in a little video

Posted on 29 May, 2011 Tags:

putty and ssh public key authentication

hardly ever did i bother to use putty, as a normal terminal usually does what i want. unfortunately, there is an "operating system" whose name i will not mention that considers itself so different that the variety of available terminals for it is pretty low.

needing to tunnel an audio stream, i finally engaged in the quest of getting ssh public key authentication working with putty on such a system. needless to say, i read Chapter 8: Using public keys for SSH authentication and Chapter 9: Using Pageant for authentication of its documentation.

also needless to say, i couldn't have imagined how many pitfalls it contains. on unix, generating a key pair and sending the public key over is something that does not take more than a minute.

putty saves the generated keys wherever you choose, with the private key named *.ppk and the public key named whatever. it also offers to export something into openssh format, which was what i wanted. i was nevertheless amazed that this exported file only contains the private key.

the public key that needs to be put into server:.ssh/authorized_keys looked like the rsa keys that are used for ssl certificates, but not very much like the keys that are normally stored in .ssh/authorized_keys. there is an option that will output the right format, -O public-openssh, which i found on unix but not yet on other platforms.

remedy:

edit a copy of this public key file with a text editor: delete all the boilerplate stuff, write ssh-rsa / ssh-dss at the beginning of the line with the actual key, join all further lines of it into just one, append something like user@host at the end and delete all the rest. the file is now ready to be appended to the other keys in authorized_keys.

puttygen on unix is documented to output this format directly using -O public-openssh. in this case i did it manually, turning this:

---- BEGIN SSH2 PUBLIC KEY ----
Comment: "rsa-key-20110410"
AAAAB3NzaC1yc2EAAAABJQAAAIEAqW/3hc9LgrNfYHFdBU37AM45s0OLfDJ1isvh
V5Ug4h0d/YzY8uzjRcZU5FrUz3NAsLlkgZck7M3Dg61/6oSZRDYAOZwsWJWhv+bx
uBY6Y2JEiFTZP1vIJoaj2v3nJz07w5n6ZtueCtodUWLi8MHotC6+zsXEmCbhI1RR
7u/8ork=
---- END SSH2 PUBLIC KEY ----

into this:

ssh-rsa AAAAB3NzaC1yc2EAAAABJQAAAIEAqW/3hc9LgrNfYHFdBU37AM45s0OLfDJ1isvhV5Ug4h0d/YzY8uzjRcZU5FrUz3NAsLlkgZck7M3Dg61/6oSZRDYAOZwsWJWhv+bxuBY6Y2JEiFTZP1vIJoaj2v3nJz07w5n6ZtueCtodUWLi8MHotC6+zsXEmCbhI1RR7u/8ork= user@host
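
instead of hand-editing, the conversion can also be done with stock tools (file names are made up):

ssh-keygen -i -f putty-pub.txt >> ~/.ssh/authorized_keys    # converts the exported SSH2/RFC4716 public key
puttygen mykey.ppk -O public-openssh                        # unix puttygen prints the openssh public key directly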

run:

start pageant. it will hang out in the system tray and private keys can be read into it and being decrypted at which point it workedforme(tm).

Posted on 10 April, 2011 Tags:

assuming you have a local git branch called my_diaspora with custom settings and changes (skins, header, footer etc) and serve that via e.g. nginx from localhost:3000 to the outer world.

git checkout master
git pull
git checkout my_diaspora
git pull origin master
bundle install (you will probably be asked for a sudo or root password)
rake spec

if necessary:

rake db:migrate && rake spec 

finally fire it up:

./script/server

now: profit!

hm, on multiple requests, here is the nginx conf I use for diaspora:

server {
    listen 443;
    ssl on;
    ssl_certificate /path/to/my/cert.crt;
    ssl_certificate_key /path/to/my/cert.key;

    server_name diaspora.subsignal.org;

    access_log /var/log/nginx/nginx.access.log;
    error_log /var/log/nginx/nginx.error.log;

    root /path/to/my/diaspora/checkout/diaspora/public;

    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_redirect off;

    location / {
            client_max_body_size 4M;
            client_body_buffer_size 128K;
            if (-f $request_filename/index.html) {
                    rewrite (.*) $1/index.html break;
            }
            if (-f $request_filename.html) {
                    rewrite (.*) $1.html break;
            }
            if (!-f $request_filename) {
                    proxy_pass http://diaspora;
                    break;
            }
    }
    error_page 500 502 503 504 /50x.html;
            location = /50x.html {
            root html;
    }

}

upstream diaspora {
    server 127.0.0.1:3000;
}

server {
    listen 80;
    server_name diaspora.subsignal.org;
    access_log /var/log/nginx/nginx.access.log;
    error_log /var/log/nginx/nginx.error.log;
    root /path/to/my/diaspora/checkout/diaspora/public;
    rewrite      ^(.*) https://diaspora.subsignal.org$1 permanent;
}

ok, but now profit!

Posted on 8 April, 2011 Tags:

As the previous post with the bridged Xen setup was not good for getting the LoadBalance IP running,
I again fiddled with the routed setup and the things I learned from my friend TH. And, against
all the warnings in probably older OVH docs, I now just use dev ethX as default gw in my
routing table(s). (I think before they introduced the virtual MAC they always blocked unknown
MACs (and therefore also random ones from virtualisation) on their switches. This
seems not to be the case anymore.)

first, get two of your 3 failover IPs and attach them to your rootserver.

second, get at least a second rootserver and attach a LoadBalancing infrastructure to both
of them and then enable only your test candidate for now.

get your Xen in routed mode (xend-config.sxp):

(network-script network-route)
(vif-script     vif-route)

don't forget to restart xend!

and in domU /etc/network/interfaces it is as easy as this (where 2.2.2.254 is the default gw of your dom0):

auto lo
iface lo inet loopback

# fixed IP
auto eth0
iface eth0 inet static
    address 1.1.1.1
    netmask 255.255.255.255
    broadcast 1.1.1.1
    post-up /sbin/ip route add 2.2.2.2 dev eth0
    post-up /sbin/ip route add default dev eth0
    post-down /sbin/ip route del 2.2.2.2 dev eth0
    post-down /sbin/ip route del default dev eth0

# moving failover IP
auto eth1
iface eth1 inet static
    address 1.1.1.2
    netmask 255.255.255.255
    broadcast 1.1.1.2
    post-up /sbin/ip rule  add from 1.1.1.2 lookup 100
    post-up /sbin/ip route add 2.2.2.2 dev eth1 table 100
    post-up /sbin/ip route add default dev eth1 table 100
    post-down /sbin/ip rule  del from 1.1.1.2 lookup 100
    post-down /sbin/ip route del 2.2.2.2 dev eth1 table 100
    post-down /sbin/ip route del default dev eth1 table 100

# moving loadbalance IP
auto eth2
iface eth2 inet static
    address 1.1.1.3
    netmask 255.255.255.255
    broadcast 1.1.1.3
    post-up /sbin/ip rule add from 1.1.1.3 lookup 200
    post-up /sbin/ip route add 2.2.2.2 dev eth2 table 200
    post-up /sbin/ip route add default dev eth2 table 200
    post-down /sbin/ip rule  del from 1.1.1.3 lookup 200
    post-down /sbin/ip route del 2.2.2.2 dev eth2 table 200
    post-down /sbin/ip route del default dev eth2 table 200
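
once the domU is up, the extra rules and tables can be inspected with:

ip rule show
ip route show table 100
ip route show table 200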

now: profit!

Getting new sh*t around the block, having new roots at OVH, cool offers, nice machines, decent services, okay price, worst UI for management webinterface ever :_)

first, get two of your 3 failover IPs and attach them to your rootserver.

to get xen bridged running you need to create virtual mac addresses in the ovh manager. assign those to the vif section of the domU config:

vif         = [ 'ip=1.3.3.7,mac=02:00:00:00:00:01,vifname=vif.serv01',
                'ip=1.3.3.8,mac=02:00:00:00:00:02,vifname=vif.serv02' ]

use in xend-config.sxp

(network-script 'network-bridge antispoof=yes')
(vif-script vif-bridge)

and in domU /etc/network/interfaces it is as easy as this:

auto eth0
iface eth0 inet static
    address 1.3.3.7
    netmask 255.255.255.255
    broadcast 1.3.3.7
    post-up /sbin/ip route add default dev eth0

auto eth1
iface eth1 inet static
    address 1.3.3.8
    netmask 255.255.255.255
    broadcast 1.3.3.8
    post-up /sbin/ip rule add from 1.3.3.8 lookup 100
    post-up /sbin/ip route add default dev eth1 table 100

Using a binduser with a password to read from LDAP/AD is common. Using the memberOf=Group attribute for authing is common, too. Both in combination can be a major fuckup: when your binduser sees the whole LDAP tree, except the memberOf attributes. Ok, rewrite your authing. Ok, do that once, twice, more often .. for every single service, changing ootb auth to something uncommon. No no no, I don't buy that.

Well, I searched for ages, but then I found a vague hint that enabling

"Pre Windows 2000 Compatibility" 

might help. And really, this info is dug up from the very bottom of the interwebbs. A golden needle in a haystack!

Enable the darn stupidly named checkbox, and hey, finally your binduser can read what your binduser should be able to read anyway.
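
a quick way to check whether the binduser now actually sees memberOf (host, DN and user are of course made up):

ldapsearch -x -H ldap://ad.example.com -W \
  -D 'CN=binduser,CN=Users,DC=example,DC=com' \
  -b 'DC=example,DC=com' '(sAMAccountName=someuser)' memberOf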

screwing over a fucked bootloader

problem:

 you got grub, lilo or something in your partitions but the shit doesn't boot

solution:

 fuck the harddisk's MBR over to get a bootmenu that boots any fuckin partition, regardless of what's inside

how-to-repeat:


DISCLAIMER: if you wipe your harddisk(s) trying this, it's your dogdamn own fault.
also, dos partition labels only. no idea if this works on gpt.

NOTE: this snippet was almost finished but got fucked up on the way. it prolly sounded nicer before.

entering:

some FreeBSD bootmedium (ie. a freesbie image if that works for you or maybe something more recent: a CD, Stick etc)

boot the thing and get root.

make your mind up about which disk you want to replace the MBR of.
if there is just one disk in the box, and this is a PATA disk connected as master on the first bus, the bios will likely call it 0x80 (0x81 for the 2nd etc.).
also, in this case the os will likely call it ad0.

knowing the disk's name in terms of bios and OS numbering will help in hitting the right disk.

making the mbr writable:

  sysctl kern.geom.debugflags=0x10

write a new mbr without affecting the partitions on it:

  boot0cfg -B -v -d 0x80 ad0

if everything went well, on next reboot you'll be greeted with a little chooser for booting.

Posted on 28 February, 2011 Tags:

I was wondering why my download speed was so low, and after switching debian mirrors
I really wondered. Well, to make it clear, the limitation was not on my side, it seems
to be somewhere on the server side. So to circumvent that and get a reasonable download
speed, just pipe the download URIs to wget and fetch them in parallel :d

cd /var/cache/apt/archives/ 
apt-get -y --print-uris install $x | egrep -o -e "http://[^\']+" | xargs -l3 -P5 wget -nv
apt-get -y install $x

works as well with dist-upgrade!

always need this one, as one tends to forget it easily:

grub-install --recheck --root-directory=/mnt/debinst /dev/sda

world domination is close!!eleven

Posted on 4 February, 2011 Tags:

apparently there is no #~$ python setup.py uninstall,
so one has to work around this.

first "reinstall" and record:

python setup.py install --record files.txt

lastly, just delete the foo:

cat files.txt | xargs rm -rf
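
if any of the recorded paths contain spaces, a null-separated variant is a bit safer (same files.txt):

tr '\n' '\0' < files.txt | xargs -0 rm -f --
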
Posted on 2 February, 2011 Tags:
wget http://dl.opticaldelusion.org/sbf_flash
chmod +x sbf_flash
./sbf_flash filename.sbf

That's it! FTW. LOL ..

Well, for a bit more --verbose, look here [1] and here [2].

[1] http://www.nodch.de/howto-sbf-files-unter-linux-flashen/2204 (German)
[2] http://blog.opticaldelusion.org/2010/05/sbfflash.html

now, profit!

Posted on 20 January, 2011 Tags:

just to remind me later :_)

duplicity restore \
--scp-command 'scp -o IdentityFile=/home/phaidros/.ssh/id_rsa_duplicity' \ 
--sftp-command 'sftp -o IdentityFile=/home/phaidros/.ssh/id_rsa_duplicity' \
--file-to-restore path/to/folder scp://user@host.mybackup.de/path/to/backup/ \
localfolderforrestoredfiles

now, profit!

:_)

Posted on 16 January, 2011 Tags:

pulseaudio and oss_emu in ubuntu maverick and maybe natty

i just stumbled across this: ubuntu removed the oss emulation from the maverick kernel without a working alternative.

hell, why would i care ? v4l and some applications that depend on it come to mind. also, the Brooktree 87x chipsets will not be working as they should.

to put this straight: i have been using linux since the early 90ies, and a couple of years ago, when esoteric stuff like tun/tap devices and so on became part of the stock kernels of debian and ubuntu, i was very happy not having to build my own kernels anymore.

unfortunately, some versions ago, ubuntu decided to go for pulseaudio. while this might be good for normal users that do not want any other audio system, as all they use is an audio/video player and its plugins for a browser, it has not been a nice experience for people wanting ie. jack or some other low latency audio system. adapting pulseaudio to just care for a musicplayer and browser and going out of the way was almost impossible back then. nowadays, a handful of lines for asound.conf might fix it.

nevertheless, the easy way was to just dpkg --purge pulseaudio from the system and be done with it.

the question remaining is: how will i be able to do video installations with this ? i guess the easy way might be using debian instead, this will hopefully take longer until things break.

fabric,

a tool with a similar purpose to chef, but instead of being built on ruby this is python that can run in a virtualenv.

its focus is mostly on remote scripting in python. this includes shell access and more over ssh.

Installation like on the Fabric Website, some template data from the Fabric Wiki, recipes and use_cases.

examples: fabrics own fabfile

example usage:
"could you let a script run that checks for free space, and maybe delete some logfiles on all production servers, please ?"

Log Bomb

A recommended read on this: Tools of the Modern Python Hacker: Virtualenv, Fabric and Pip, IPython and virtualenv and Fabricate Your Way to Better Deployment

we waited ages for that ..

but now the xen-tools are back to Ubuntu Linux via Launchpad PPA!
See here: https://launchpad.net/~xtaran/+archive/xen-tools/

or better:

apt-add-repository ppa:xtaran/xen-tools

now, profit!

:_)

Posted on 2 November, 2010 Tags:

stumbling over the little ones ..

well, lenny as xen dom0 is nice and handy but lacks recent kernels, as in:
supporting newer OSs, like ubuntu > karmic.

so, when updating from hardy to lucid on the subsignal.org webservers I
ran into the problem of a boot always falling back into the root prompt,
because of some

mountall:mountall.c:2938: Assertion failed in main: udev_monitor = 
udev_monitor_new_from_netlink (udev, “udev”)
init: mountall main process (721) killed by ABRT signal

so, logically one follows the path to the future and migrates to pygrub. besides the obvious symlink

/usr/bin/pygrub -> /usr/lib/xen-3.2-1/bin/pygrub

then, get a kernel:

# aptitude install linux-image-virtual

and edit the menu.lst:

# nano /boot/grub/menu.lst 

default         0
timeout         5
title           webfoo, aka wiki.openwrt.org :)
root            (hd0,0)
kernel          /boot/vmlinuz-server root=/dev/xvda2 ro splash
initrd          /boot/initrd.img-server 
boot

but there is still one other really strange occurrence of entropy and misunderstanding
between me and murphy here:

bootloader = "/usr/bin/pygrub"
memory      = '1536'
vcpu        = 4

root        = '/dev/xvda2 ro'
disk        = [
            'phy:/dev/space/webfoo-disk,xvda2,w',
            'phy:/dev/space/webfoo-swap,xvda1,w'
              ]

name        = 'webfoo'
vif         = [ 'ip=xx.xx.xx.xx' ]

on_poweroff = 'destroy'
on_reboot   = 'restart'
on_crash    = 'restart'

vcpus       = '2'
extra       = 'console=hvc0 xencons=tty1 clocksource=jiffies'

before I found the mistake, the two lines for the disks were in a
different order, first xvda1 then xvda2 .. logically pygrub runs
for the first device .. which then was the swap .. d'oh.
so, make sure the partition with kernel and grub on it is the first one
listed in domU.cfg!

hth :)

Posted on 20 September, 2010 Tags:

life can be so easy ...

phaidros@daddl:~/workspace/openrd/openrd.git$ sudo aptitude install uboot-mkimage
The following NEW packages will be installed:
  uboot-mkimage 
0 packages upgraded, 1 newly installed, 0 to remove and 5 not upgraded.
Need to get 9,876B of archives. After unpacking 57.3kB will be used.
Get:1 http://de.archive.ubuntu.com/ubuntu/ lucid/main uboot-mkimage 0.4build1 [9,876B]
Fetched 9,876B in 0s (61.7kB/s)      
Selecting previously deselected package uboot-mkimage.
(Reading database ... 52121 files and directories currently installed.)
Unpacking uboot-mkimage (from .../uboot-mkimage_0.4build1_i386.deb) ...
Setting up uboot-mkimage (0.4build1) ...

phaidros@daddl:~/workspace/openrd/openrd.git$ make ARCH=arm CROSS_COMPILE=/home/phaidros/workspace/openrd/arm-2010q1/bin/arm-none-linux-gnueabi- uImage
  CHK     include/linux/version.h
  CHK     include/generated/utsrelease.h
make[1]: `include/generated/mach-types.h' is up to date.
  CALL    scripts/checksyscalls.sh
  CHK     include/generated/compile.h
  Kernel: arch/arm/boot/Image is ready
  SHIPPED arch/arm/boot/compressed/lib1funcs.S
  AS      arch/arm/boot/compressed/lib1funcs.o
  LD      arch/arm/boot/compressed/vmlinux
  OBJCOPY arch/arm/boot/zImage
  Kernel: arch/arm/boot/zImage is ready
  UIMAGE  arch/arm/boot/uImage
Image Name:   Linux-2.6.33-rc8-00099-gb4cb3f9
Created:      Wed Aug 25 17:42:47 2010
Image Type:   ARM Linux Kernel Image (uncompressed)
Data Size:    2561572 Bytes = 2501.54 kB = 2.44 MB
Load Address: 0x00008000
Entry Point:  0x00008000
  Image arch/arm/boot/uImage is ready

yippieh, no longer complaining about missing mkimage .. thanks ubuntu (though possibly it came directly from Debian, so thanks to you guys as well !!! )


yay. finally I managed to mangle my OpenRD Client board .. sigh

I had troubles: my dev board was never stating the correct
manufacturer ID for the NAND, which made flashing a newer Uboot
bootloader impossible.

This was the error popping up whatever I tried with Uboot/OpenOCD ...

unknown NAND device found, manufacturer id: 0x00 device id: 0x00
probing failed for NAND flash device

It drove me nuts until I found a solution [1]. Yay!

Now with a newer Uboot (v3.4.19) I was suddenly able to install Debian on the
SD/MMC. For that I followed the great Howto under [2].

Heh, nice. So it finally boots Debian .. and having the D-I via serial console
in my screen (hint: screen tops minicom a million times: screen /dev/ttyUSB0 115200)
just feels very nice :)

But then I got hot and wanted moar .. MUAHAHA !! .. I finally put the Debian
install from the SD/MMC onto the internal flash, but not before converting that
flash to UBIFS, following this guide [3] ff.

Heh, so now I have an OpenRD Client, booting Debian by choice from SD/MMC or the
internal flash, with either kernel 2.6.32-5-kirkwood from Debian or the Sheeva kernel
from [4] with 2.6.35.3 ..

I love it!

Further down I have a list with more good-to-read-in-case-links.

[1] http://code.google.com/p/openrd/issues/detail?id=7
[2] http://www.cyrius.com/debian/kirkwood/openrd/install.html
[3] http://plugcomputer.org/plugwiki/index.php/Installing_Debian_To_Flash#Convert_internal_flash_root_partition_to_UBIFS
[4] http://sheeva.with-linux.com/sheeva




hth :)

Ever wondered why it currently seems so fashionable for hackers to bash on homeopathy ?

well, we got ... numberz ! and to be less boring, put them into some pictures that give a relation.

First of all, lets see for what kinds of people homeopathy actually works and for whom not:

average effectiveness

strange, heh ?

well, there's moar: as demonstrated, it is easy to understand that people only choose the data that is sufficient to prove their claims. in the current homeopathy bashing, most arguments go on about the theoretical possibility of any effect based on the amount of dilution of a given substance. Calculations involving the number of molecules in the universe will quite often state that the effective dilution is in a range that exceeds it.

as paracelsus stated that success alone will tell, this gives quite a different picture:

use only datasets that fit into your mindset

hm... so, really, if someone believes something is completely harmless, then why hit on it ? the winner is quite clear:

who has an advantage


a clear winner at 100 per cent, not mentioning the small fry companies and users that are hurt by it.

To deviate a bit, an analysis of the hacker mindset in relation to alternative medicine seems in order.

As everyone is a computer expert these days, it should be mentioned that most computer people can maintain an overview over complex matters.
It should however also be mentioned that this mostly only applies to their own field of interest and proficiency and horribly fails at other systems.

hackers

Does that mean if they cannot maintain a feeling for a complex system outside of their domain it will equally be impossible to still take them seriously ?

It is about time for some lowkicks.

We shall take a look at their self image and how it deviates from reality.

Taking into account that they have access to a lot of information on the interwebs, this might suggest an omniscient belief system coupled with some more illusions:

omniscient beliefs

reality shows a different picture, one of delusion.

This leads us to the difference between life and its skewed image in the mirror of the web:

delusions

For people who strictly adhere to the dogma of duality expressed in ones and zeroes, anything fuzzy is an unlikely encounter:

a straight mindset

But what the heck anyway, would you rather get an operation by a computer-illiterate surgeon or a hacker believing he/she knows about medicine ? How often have hackers helped you medically ?

What if conventional medicine fails ? If alternative medicine were totally non-effective, how could it hurt trying an alternative ?

fact is that there is a significant number of so-called hopeless cases that were solved by "ineffective and unscientific" means.

doh.

To be able to spot a hunch of an explanation, we need to leave the rational battlefield and enter martial arts. This view allows us to pinpoint a fundamental difference between hackers and medics from any field.

The Difference

This clearly gives an indication that a certain energy level needed to administer and receive alternative medical treatment is lacking in our aforementioned subjects.

Life is not a game where Experience Points can be gained by sitting in front of a screen and clicking at the things that pass by.

On the contrary, using computers several times a day leads to a deprivation of energy that cannot adequately be quenched by pizza:

computers suck

To increase the amount of MediPacks (ie. Health), it makes sense to turn the box off once in a while and stop bitching about stuff one does not have a clue about.

also, starting to use both halves of the brain would be something.

Hugh.

for ages, i had my ssh-agent settings in my .bashrc. with lucid this setup stopped working.

i had to tell gconftool to not intermingle:

gconftool-2 --set -t bool /apps/gnome-keyring/daemon-components/ssh false

which did not do much. as an ugly hack, i additionally just disabled my own ssh-agent handling if it runs on lucid:
    # ssh-agent stuff
    # broken with ubuntu lucid
    distro=$(lsb_release -c | sed -e 's/Codename://g' | grep lucid)
    if [ ! -z "$distro" ] ; then
                #echo "$distro detected, aborting ssh-agent logic"
                echo $- | grep i > /dev/null
                noninteractive=$?
                if [ "$noninteractive" == 0 ] ; then
                    ssh-add -l >& /dev/null
                    if test $? = 2; then
                         if test -f ~/.agent; then
                                 . ~/.agent
                         fi
                         ssh-add -l >& /dev/null
                         if test $? = 2; then
                                 ssh-agent > ~/.agent
                                 . ~/.agent > /dev/null
                         fi
                    fi
                fi

fi


links: live.gnome about ssh and ssh-agent not forgetting passphrase

Posted on 27 June, 2010 Tags:

linked the aptitude docs ..

.. as I regularly tend to forget, I linked the aptitude docs on http://subsignal.org/aptitude.

the enhanced search option you can find directly here:
The aptitude Search Term Quick Guide !!eleven!

now, use it.
then, fun :)


what is the station url format in shell-fm?

using the nice and fantastic shell-fm player, the last-fm player in your beloved terminal,
you might wanna know which channels you can listen to and preferably in which
format you have to enter them.

All the ones I found over the interwebz are here:

lastfm://user/${user}
lastfm://user/$USER/recommended
lastfm://user/$USER/playlist
lastfm://user/${user}/loved
lastfm://user/${user}/personal
lastfm://usertags/${user}/${usertag}
lastfm://artist/${artist}/similarartists
lastfm://artist/${artist}/fans
lastfm://globaltags/${globaltag}

Hehe, meanwhile, crawling through the omniscient dump, I just found another friendly
cli player: lastbash (homepage, wikipedia article). neat.

can haz these channels e.g.:

lastbash "lastfm://globaltags/jungle"
lastbash "lastfm://globaltags/glitch"
lastbash "lastfm://user/phaidros7/neighbours"
lastbash "lastfm://artist/Salmonella Dub/similarartists"

now: fun!

atheism is stoopid!


squeeze still has no jack support in libportaudio on amd64, ..

.. which is needed to use mixxx with jack

# apt-get source portaudio19  
# apt-get build-dep portaudio19

# cd portaudio19-19+svn20071207/

in debian/rules, remove ENABLE_JACK = no from line 48 -
ENABLE_JACK must be explicitly set to "yes":

# nano debian/rules

# dpkg-buildpackage -rfakeroot -b

# cd ../
# dpkg -i *.deb

mixxx still doesn't detect jack for me, stay tuned. Does work now!

(no) fun! (yet)


10 commands for installing latest nvidia on squeeze

# cat >> /etc/apt/sources.list <<EOF
deb http://ftp.tu-chemnitz.de/pub/linux/debian/debian/ unstable main non-free contrib
deb-src http://ftp.tu-chemnitz.de/pub/linux/debian/debian/ unstable main non-free contrib
EOF

# echo 'APT::Default-Release "testing";' >/etc/apt/apt.conf.d/00defaultrelease

# aptitude update
# aptitude install module-assistant nvidia-kernel-common build-essential
# m-a clean nvidia-kernel-source
# m-a purge nvidia-kernel-source
# m-a prepare
# aptitude install nvidia-kernel-source/unstable
# m-a a-i nvidia-kernel-source
# aptitude -t unstable install nvidia-glx nvidia-libvdpau1 nvidia-settings nvidia-libvdpau1-ia32 nvidia-glx-ia32

fun!
Posted on 19 February, 2010 Tags:

install debian from usb stick

# wget http://http.us.debian.org/debian/dists/stable/main/installer-amd64/current/images/hd-media/boot.img.gz
# sudo umount /dev/sdb1
# zcat boot.img.gz > /dev/sdb1

unplug the stick and plug it back in.

# wget http://ftp.de.debian.org/debian-cd/current/amd64/iso-cd/debian-504-amd64-netinst.iso
# mount

see where stick is mounted.

# cp debian-504-amd64-netinst.iso /media/Debian\ Inst/

happy installing!

Posted on 18 February, 2010 Tags:

OpenWRT on Xen patched, up and running

Thanx to jow and thomas h., we now have working support for Xen in OpenWRT!
see the following changesets for what was done:

So you can now choose x86 as target with Xen as subtarget.
nice. we give back to the connected funk-haeusers!

Posted on 13 February, 2010 Tags:

locale: Cannot set LC_ALL to default locale: No such file or directory

now and then this happens to me on a new machine. usually I just set
LC_ALL, LANG & LANGUAGE like this:

root@gargamel:~# cat /etc/profile.d/locale 
export LANGUAGE="en_US.UTF-8"
export LC_ALL="en_US.UTF-8"
export LANG="en_US.UTF-8"

still it complains about a missing default locale. so we have to do this:

localedef -v -c -i en_US -f UTF-8 en_US.UTF-8 

because the en_US.UTF-8 locale is usually not generated yet.
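
the more debian-ish route is to enable the locale in /etc/locale.gen and regenerate, which does roughly the same as the localedef call above:

sed -i 's/^# *en_US.UTF-8/en_US.UTF-8/' /etc/locale.gen
locale-gen
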
we r done. we give back to the connected funk-haeusers!

Posted on 29 January, 2010 Tags:

mkcd - mkdir & cd directly into it

phaidros@42:~/ #$ echo 'mkcd() { mkdir -p "$@" && cd $_; }' >> ~/.bashrc
phaidros@42:~/ #$ . ~/.bashrc
phaidros@42:~/ #$ mkcd /path/to/a/new/folder/
phaidros@42:/path/to/a/new/folder/#$ echo "Well done :) !"

Little, easy, damn useful.
We give back to the connected funk-haeusers!

Posted on 13 January, 2010 Tags:

audiosetup

this is a collection of information i went through to set up a digital audio workstation (DAW) running on linux.

as far as the systems here were concerned, the distros used for the setup were debian squeeze (testing) and ubuntu intrepid, karmic and lucid; for other distros take a look at http://wiki.linuxmusicians.com/doku.php?id=linux_multimedia_distro_s.

1. hardware

2. software

3. putting it all together

data rates, bit depth, midi, osc

using it

general reference

audio communication channels
resources linux audio en francais

theory

Posted on 5 January, 2010 Tags:

finding obsolete conffiles

when updating from lenny to squeeze, one will face two major upgrades of the system:

  • grub becomes grub-legacy and is superseded by grub2

for people not wanting to convert yet as not everything is supported yet, this can be postponed if desired.

  • sysv-rc is being accompanied by insserv

a new, faster system for booting (it looks even faster than upstart to me). its upgrade might fail for packages on the system that were not --purged but just --removed.

getting a list of obsolete conffiles is as simple as

dpkg-query -W -f='${Conffiles}\n' | grep obsolete

if you're absolutely sure you dont need any of the old stuff (possibly including your own customisations) you might also do

dpkg --purge $(dpkg -l|awk '/^rc/ {print $2}')

which will simply purge all removed but not purged packages.

Posted on 5 January, 2010 Tags:

List all packages containing the words route or routing in their description:

aptitude search '~drout(e|ing)'

List installed packages that are not official Debian packages:

aptitude search '~S~i!~Odebian'

List packages installed from experimental:

aptitude search ~S~i~Aexperimental

List packages with 'ruby' and 'gtk' in their names:

aptitude search 'ruby gtk'
aptitude search ~nruby~ngtk

List installed packages that depend on bash:

aptitude search ~S~i~Dbash

Purge all packages that have been removed except for their config files:

aptitude purge ~c
Posted on 30 December, 2009 Tags:

atheism is arrogant ;)

Posted on 4 December, 2009 Tags:

xm console not working (blank)

on some xen domUs I recently noticed that the xm console didn't come
up. while investigating the issue I found out that you gotta set the
tty in the machine's config like this:

extra       = 'xencons=tty1'

but still I got a blank screen instead of the login. the solution is simple:
for systems still having an inittab (e.g. debian) look for the following
line:

nano /etc/inittab

1:2345:respawn:/sbin/getty 38400 tty1

for systems which already utilize upstart (ubuntu), look in another file:

nano /etc/event.d/tty1

start on stopped rc2
start on stopped rc3
start on stopped rc4
start on stopped rc5

stop on runlevel 0
stop on runlevel 1
stop on runlevel 6

respawn
exec /sbin/getty 38400 tty1
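
to pick up the change without rebooting the guest (assuming it is otherwise reachable, e.g. via ssh), something like this should do:

telinit q              # sysvinit: re-read /etc/inittab
initctl start tty1     # upstart: start the tty1 job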

this makes the xm console work on all my machines again.

Posted on 4 December, 2009 Tags:

if you are using virtualbox and seeing this:

This kernel requires the following features not present on the CPU:
0:6
Unable to boot - please use a kernel appropriate for your CPU

or the like, it could be that you are trying to run an *buntu server kernel, which needs PAE.
just enable PAE/NX for the vm guest. this should solve the issue.
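
this can be done in the GUI, or on the command line roughly like so (the vm name is of course a placeholder):

VBoxManage modifyvm "myvm" --pae on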

Posted on 10 November, 2009 Tags:

freebsd bootloader repairs et al

when trying to repair e.g. grub on freebsd (or to overwrite the disk the box is currently running on, etc.), the system will complain about doing so.

sysctl kern.geom.debugflags=16

will let you shoot yourself in the foot nevertheless (make the mbr writable).
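
once done, the foot-gun can be closed again by setting the flag back to its default:

sysctl kern.geom.debugflags=0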

Posted on 8 October, 2009 Tags:

make deb from cpan

simple:

dh-make-perl --build --cpan $module
dpkg -i $module.deb
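
this assumes dh-make-perl is already installed; if not, it is just a package away:

apt-get install dh-make-perl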

Posted on 23 June, 2009 Tags:

preparing the guest system

you need to put the following in place inside the guest:

  1. /etc/preinit

    • add "mknod /dev/hvc0 c 229 0"
    • before "exec /sbin/init"
  2. /etc/inittab

    • add "hvc0::askfirst:/bin/ash --login"

in the domU.conf, besides the usual settings (a full sketch follows below the list):

  • disk = ['tap:aio:/path/to/openwrt-x86-ext2.fs,xvda1,w']
  • root = '/dev/xvda1 rw'
  • extra = "console=hvc0 init=/etc/preinit"
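
for illustration, a complete domU.conf might then look roughly like this; kernel path, memory, name and vif are placeholders, not values from this setup:

kernel = '/path/to/your/domU-capable-vmlinuz'
memory = 64
name   = 'openwrt'
vif    = ['bridge=xenbr0']
disk   = ['tap:aio:/path/to/openwrt-x86-ext2.fs,xvda1,w']
root   = '/dev/xvda1 rw'
extra  = "console=hvc0 init=/etc/preinit"
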
Posted on 22 June, 2009 Tags:

building debs from source

make sure to have deb-src entries in your sources.list[.d]

among the installed packages should be at least

build-essential debhelper fakeroot autoconf automake

add more if necessary (like dh-make, quilt and so on)

get the dependencies right:

sudo apt-get build-dep $package

get the sources and build them:

sudo apt-get -b source $package

for manual adjustments, leave out the -b and in the package source dir, do

fakeroot debian/rules binary

after changing whatever you wanted different.
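
alternatively, if dpkg-dev is installed, a binary-only, unsigned build from inside the source dir also works:

dpkg-buildpackage -us -uc -b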

Posted on 10 June, 2009 Tags:

ikiwiki setup notes

here's a checklist should I create another iki instance. just assuming the vcs would be svn this time.

deciding for

  • one or more webroots: as one repository is rendered into html by ikiwiki --setup ikiwiki-setup, it can also be rendered somewhere else with a different setupfile, i.e. one is a wiki (some ppl call it a backend) and another one is rendered into plain html just to look at. (some ppl call it a "cms" but this is way nicer to use than a cms. also, it's lightyears faster)

/$webroot/$instance(wiki)
/$webroot/$instance(dumb)

and then

  • place for one or more Makefiles and ikiwiki-setup.$instance (no, i won't rename this file anymore)

~/wikis/$instance

  • place for a checkout that can also be edited without the webif

~/wikis/$instance/src

  • $vcs repository location

/wotevr/svn/repo/$instance

  • webserver config
  • urls for all components

http[s]://f.q.dn/iki/$instance
http[s]://f.q.dn/svn/$instance

  • $vcs repo webserver config or not

  • component and setup for the history button (->ie viewvc)

http[s]://f.q.dn/iki/$instance/history
ScriptAlias /viewvc /usr/local/viewvc/bin/cgi

finally,

ikiwiki --setup ikiwiki-setup.$instance
ikiwiki-makerepo svn ~/wikis/$instance/src /wotevr/svn/repos/$instance

done.

Posted on 8 June, 2009 Tags:

as I did, you might also wonder if mailman is capable of running on multiple domain names with a single instance.

yes it is, easily. configure mailman for your default host; for the next ones, just add the following line to /etc/mailman/mm_cfg.py

add_virtualhost('lists.yourdomain.org','lists.yourdomain.org')

add a vhost for your webserver to point http://lists.yourdomain.org to mailman. nicely, you can now add new lists (with the sooperdooper password) via the webif.

now the surprising part: your new list will still be listed on the webif of the default mailman host. I tried to patch mailman, as I remembered this was not a feature offered upstream. but while reading the code, I noticed that there already is a comparison checking whether a list belongs to the web url. so all you need to do is change the web_page_url variable of your new list to the virtual domain, because by default all new lists belong to - who guessed - the default host. easy thing:

bin/withlist -l -r fix_url listname -u lists.yourdomain.org

well, now it just shows up only on the webif of the vhost it belongs to, ez, eh?

Posted on 8 June, 2009 Tags:

ubuntu stable sucked, so I just went alpha!

grep -r jaunty /etc/apt/sources.list* | grep -v "^#" | gawk '{print $1}' | sed 's/:.*$//' | xargs sed -i 's/jaunty/karmic/g'

set all used apt sources from jaunty to karmic.

sudo aptitude update
sudo aptitude dist-upgrade    

for a t41p (or other hardware using an Rxxx ATI card) follow the instructions here: https://launchpad.net/~xorg-edgers/+archive/ppa

get your card running. nice, latest DRI, and works. (don't forget to:

aptitude install xserver-xorg-video-ati

)

I then again got troubles with suspend/hibernation. so tetzlav pointed me once moar to uswsusp.

sudo aptitude install uswsusp
s2ram -f -p -m

for suspend to ram.

s2disk

for suspend to disk. roxx!

and, well, splash was still buggy. kick it. now I got radeonfb used for 1400x1050 on the console

nano /boot/grub/menu.lst

add the following to the line with #defoptions (do not remove the '#', remove all other vga= or video= settings)

vga=835 video=radeonfb
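
the defoptions line might then look roughly like this (keep whatever other options were already there; this is just an illustration):

# defoptions=vga=835 video=radeonfb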

for that to work, you must edit /etc/initramfs-tools/modules. just add

i2c_algo_bit
fb_ddc
radeonfb
fbcon

fb_ddc is used to detect the screen resolution, i2c_algo_bit is needed by it. radeonfb, of course ..

now get your changes into place

sudo update-grub
sudo update-initramfs -u -k all

reboot, see, suspend, wake up .. \o/ ..

Posted on 6 June, 2009 Tags:

about


this blog features tech tips, sometimes done in the spirit of technical thugging. its name was inspired by stuff that can be done with a FLT.
hoping to be of some practical use, this blog contains info about things someone did to get some stuff done.

if any of the things here inspire you to shoot yourself in the foot, you are welcome to do so but don't come complaining except when trying to help.

you get what you pay for, and it's free.

Posted on 6 June, 2009 Tags: