Trying out a new theme. Leopress was getting old, so I am kinda “theme shopping” now.
So, being the type of “computer history” nerd I am, I got one of my odd and unusual itches. This time I wanted to poke around with MVS (and its corresponding JCL) and TSO. TSO is Time Sharing Option: a multi-user environment, something we tend to take completely for granted. I did this with the help of Hercules, which is a System/370, System/390, and z/Architecture emulator for Linux, Windows, and OS X.
I am torn between showing the whole thing as a wall-of-shame item or just ranting about an anonymous (open source) user management product. It’s not alone in this sin; I’ve seen the same problem in *expensive* database-driven shopping cart and user management apps.
What problem am I speaking of? LACK OF INDEXES. Seriously. If you have a sessions table, and you’re searching for old sessions to expire, YOU NEED AN INDEX ON THE TIME COLUMN.
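The fix is one statement. A minimal sketch, assuming a hypothetical sessions table with a last_active timestamp column in a database called appdb:

# table, column, and database names are made up for illustration
mysql -e 'CREATE INDEX idx_sessions_last_active ON sessions (last_active);' appdb
# with the index, the expiry sweep, e.g.
#   DELETE FROM sessions WHERE last_active < NOW() - INTERVAL 1 DAY;
# becomes an index range scan instead of a full table scan on every run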
Did a not-exactly-smooth migration to my new webserver today. Found out that Debian 5/Lenny seems to have completely broken suPHP; it can’t correctly figure out the DocumentRoot anymore for some reason. It complains to the error log: SoftException in Application.cpp:202: Script “x” resolving to “x” not within configured docroot – except it is, so heh. I’ll have to dig into that later. I also have some back end stuff to dig around in, so for right now the new webserver is quite a bit slower than the old setup.
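For anyone poking at the same thing: the docroot check is governed by /etc/suphp/suphp.conf. A rough sketch of the relevant bits, if memory serves (the paths here are examples, not my actual config):

; /etc/suphp/suphp.conf (excerpt, example values)
[global]
; scripts must resolve to somewhere under one of these paths
docroot=/var/www:${HOME}/public_html
; if true, suPHP also checks against the vhost's DocumentRoot,
; which is the check that seems to be misfiring here
check_vhost_docroot=true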
Yup, it’s SysOp here. I know, it’s been a while but I’ve been busy and there have been a lot of changes. On with the post though!
Well, I made the leap to Windows 7 after having to buy a new laptop (long story short, the desktop is dead). Upon upgrading to Win7 RC1 (the laptop came with Vista), my Time Capsule disks stopped working with a mysterious username/password error, number 86.
Bryan Cantrill over at Sun Fishworks wrote an excellent blog post on why SPECsfs sucks. Go read it. Seriously. You need to. Then start to wonder about all the other “benchmarks” SPEC publishes. The blog post explains WHY you can’t get affordable storage from big vendors, and why all of its performance is so crappy in real-world scenarios: they’re all targeting this benchmark as their milestone.
OK, I finally gave in and added myself to Twitter… It’s very likely not to last very long, but I figured I’d give it a try.
So, as some of you might know, I built a new server. Haven’t been able to put it into production mostly because of lack of time. Then when I had time I was noticing strange drive issues. My SATA drives were going online and offline somewhat randomly. This went on for about two months while I took care of my day job and my life. Then it came down to finally figuring out what was going on.
During shutdown:

Inode 0000010029fa7bc8: orphan list check failed!

Well crap. Guess we’ll be fscking on bootup… this is going to take a while…
/dev/VolGroup00/vz: Inode 80987262, i_blocks is 163148, should be 162952. FIXED.
/dev/VolGroup00/vz: Inode 99082779, i_blocks is 11422, should be 11420. FIXED.
/dev/VolGroup00/vz: Inode 115564804, i_blocks is 740918, should be 740916. FIXED.
/dev/VolGroup00/vz: Inode 136136891 has illegal block(s).
/dev/VolGroup00/vz: UNEXPECTED INCONSISTENCY; RUN fsck MANUALLY. (i.e., without -a or -p options)
[FAILED]

WELL GEE YA THINK!
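For the record, the manual run it’s asking for looks roughly like this (assuming the volume mounts at /vz; adjust to your layout):

# from single-user/rescue mode, with the volume unmounted
umount /vz
fsck.ext3 -f -y /dev/VolGroup00/vz
# -y answers yes to every repair prompt; drop it if you want to eyeball each fix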
Let me count the ways, here’s #2 (see the previous post on yum for #1)….
[root@vps0 security]# md5sum access.conf*
1ab9971e4ec0682b89aaff29fea6de9e  access.conf
1ab9971e4ec0682b89aaff29fea6de9e  access.conf.rpmnew
[root@vps0 security]#

Why the hell is it creating a .rpmnew file for an IDENTICAL *CONFIG* file? Hell, even the (modification) timestamps are identical!
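If you want to sweep a box for this nonsense, here’s a quick sketch (the /etc scope and the echo-instead-of-rm are deliberate; treat it as a starting point):

# list .rpmnew files that are byte-identical to the config they shadow
find /etc -name '*.rpmnew' | while read -r new; do
  orig="${new%.rpmnew}"
  if [ -f "$orig" ] && cmp -s "$orig" "$new"; then
    echo "identical: $new"   # swap echo for rm once you trust the output
  fi
done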
This happened while doing a template fetch onto our Virtuozzo machine. A template fetch just downloads every configured application (as opposed to EVERY application, which would create a mirror; it downloads only the applications that are configured, plus their dependencies). So it’s running yum in download-only mode.
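If you’ve never seen download-only mode, it’s roughly equivalent to this (assuming the yum-downloadonly plugin is installed; the package name and download dir are just examples):

yum -y install --downloadonly --downloaddir=/vz/template/cache somepackage
# packages land in the download dir without being installed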
[root@vps1 private]# vztop
vztop - 18:02:45 up 4 days, 17:30, 2 users, load average: 1.29, 1.36, 1.28
Tasks: 372 total, 2 running, 370 sleeping, 0 stopped, 0 zombie
Cpu(s): 13.
64GB RAM
2TB DISK
My server’s bigger than youuuurs…. :P!
So, if you’re using Debian 4, make sure your iSCSI initiator settings are right, because if your auth settings are wrong, you will have a big box of fail.
Debian GNU/Linux 4.0 web6 ttyS1

web6 login: scsi2 : iSCSI Initiator over TCP/IP
BUG: unable to handle kernel paging request at virtual address 00100104
printing eip:
f8b916f4
*pde = 00000000
Oops: 0002 [#1]
SMP
Modules linked in: ib_iser rdma_cm ib_addr ib_cm ib_sa ib_mad ib_core iscsi_tcp libiscsi scsi_transport_iscsi binfmt_misc button ac battery ipv6 autofs4 dummy nfs lockd nfs_acl sunrpc 8021q dm_snapshot dm_mirror dm_mod loop serio_raw shpchp e7xxx_edac psmouse i2c_i801 i2c_core rtc pci_hotplug evdev edac_mc pcspkr ext3 jbd mbcache ide_disk generic qla2xxx e100 piix mii firmware_class scsi_transport_fc uhci_hcd e1000 scsi_mod ide_core usbcore thermal processor fan
CPU: 1
EIP: 0060:[<f8b916f4>] Not tainted VLI
EFLAGS: 00210282 (2.
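The settings in question live in /etc/iscsi/iscsid.conf (open-iscsi); make sure they match what your target expects. Placeholder values here, obviously:

# CHAP settings for the normal session
node.session.auth.authmethod = CHAP
node.session.auth.username = someinitiatoruser
node.session.auth.password = somesecret
# discovery has its own, separate CHAP settings
discovery.sendtargets.auth.authmethod = CHAP
discovery.sendtargets.auth.username = someinitiatoruser
discovery.sendtargets.auth.password = somesecret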
Well, we’ve now had a few months of full production ZFS usage, and we’ve had our first drive failure, which exposed the oddities of drive failures under ZFS. It works REALLY hard to cover them up, so much so that it never really quite gave up on the dead drive until I ran zpool offline on it. That said, there was NO effect on users at all as far as I can tell. Despite the drive producing errors and generally not responding, the only commands suffering were zpool-related commands that actually went to access the affected drive directly; overall ZFS performance and function didn’t degrade.
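For the curious, the recovery amounted to something like this (pool and device names are made up here):

zpool status tank           # the dying drive shows mounting read/write errors
zpool offline tank c2t4d0   # stop ZFS from retrying it; the pool keeps serving I/O
# physically swap the drive, then:
zpool replace tank c2t4d0   # resilver onto the replacement
zpool status tank           # watch the resilver finish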
Sorry, but I’m a big fan of Adam Savage, Mythbusters, and Jamie Hyneman, and of course the whole M5 Crew and Mythbusters Crew, so I just have to embed this here. (And yes, of course, Kari Byron, who wouldn’t be!)
[Editor note: embedded google video using ANCIENT flash player, lost to time]
Well, the box running dotblag.com, while plenty serviceable, is showing its age. I’ve ordered a pretty large machine (just short of $3000 in total parts) and the bits are on their way, woohoo! I’ll be setting it up and burning it in over the next month or two. Once it’s ready, dotblag will be moving to it. I’m still not sure exactly what the software setup is going to be, but some sort of master/host OS with virtual containers to run stuff.
While it’s not clear exactly who is causing what, what is clear is that the areca driver tries to dereference a NULL pointer, either because the adapter screws up or because the driver screws up somewhere. The result is a Solaris kernel fault pointing at the arcmsr driver, and apparently an adapter lockup. It’s not 100% clear what causes this condition: it could be the driver not handling some buffer appropriately, or it could be the card sending an error that the driver doesn’t handle.
Ordered two racks, received two racks. One is a rhombus though. The bottom part was caved in. *sigh* sending that one back.
In other news it’s been busy, things are starting to happen for our office move. If the damaged rack is still here tomorrow (looks like it will be) I’ll update this entry with a picture or two of it.
I find myself realizing I’ll probably have to go to a fabric store to get some velcro tape. Why? To attach the RAID battery backup module in a 1U chassis. Why can’t they just include some of this with them?
I’ve been pretty swamped lately so I haven’t had any time to fire off an update. Trust me though, it’s not that there haven’t been update-worthy-goings-on.
I am therefore, .Fail.