Wednesday, June 17, 2009
Just the other day, I got some excellent news: my publication entitled "On the Design of Scalable, Self-Configuring Virtual Networks" was accepted into SuperComputing 09, which had an acceptance rate of ~22.6%. Sadly, the paper was shepherded, so I have to make some tweaks to it, but hopefully that won't be too painful. The only thing that may make it painful is that all I have left is a tex file and none of the original images from the submission. The paper can be found on the SC09 web page:
http://scyourway.nacse.org/conference/view/pap152
Tuesday, June 9, 2009
Writing Application Entry Points
A long time ago, I spent a few days working on an option parser for C# tailored around my application, SimpleNode. The name was quite misleading, as SimpleNode let users create many nodes, act as a user interface into the system, and probably had a handful of other behaviors that have long since escaped me. It was largely unwieldy, probably not terribly useful as a learning tool, and not very reusable. So I chopped everything out and went with thin applications tailored around objects with simple, specific tasks. This allowed better integration, at the cost of dumbing down the command-line interface and having multiple applications with very similar behavior.
I typically desire to do things right the first time and not come back later. When I don't get things quite right, I acknowledge the problem but add it to my priority queue, so I may or may not ever get back to it. Recently, we've had an increase of interest in our system along with the stage 1 completion of my GroupVPN, so I felt it fitting to improve the usability of our project. Lucky for me, one of my peers in the lab is very interested in these sorts of technologies and was able to point out NDesk.Options. I think the results are pretty good. What displeases me is that this sort of interface is probably pretty generic, so if a user wants a similar frontend, they'd have to copy and paste 100 lines of code to get the same thing, which may be ridiculous if what they actually need only requires a few lines, like this.
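To give a flavor of what such a frontend looks like, here is a minimal sketch using NDesk.Options; the option names and the Runner class are made up for illustration and are not our application's actual interface.

using System;
using System.Collections.Generic;
using NDesk.Options;

public class Runner {
  public static void Main(string[] args) {
    string config = null;
    int count = 1;
    bool show_help = false;

    // Each entry maps an option prototype to a callback that stores the parsed value.
    var options = new OptionSet() {
      { "c|config=", "path to a configuration file", v => config = v },
      { "n|count=", "number of nodes to start", (int v) => count = v },
      { "h|help", "show this message and exit", v => show_help = v != null },
    };

    List<string> extra;
    try {
      extra = options.Parse(args);
    } catch(OptionException e) {
      Console.WriteLine(e.Message);
      return;
    }

    if(extra.Count > 0) {
      Console.WriteLine("Ignoring extra arguments: {0}", string.Join(" ", extra.ToArray()));
    }

    if(show_help || config == null) {
      // Prints the descriptions registered above as a usage message.
      options.WriteOptionDescriptions(Console.Out);
      return;
    }

    Console.WriteLine("Would start {0} node(s) using {1}", count, config);
  }
}

Running this with -h (or without a config) would print usage text generated from the descriptions above, which is a good chunk of what that 100 lines of boilerplate provides.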
Originally, this post's title was going to be "Polymorphic Entry Points." I'd love to have something like that. I guess this could be achieved by adding helper methods to each class we instantiate that requires more than 5 lines, but I feel there needs to be a clear distinction between application entry points and classes, as I think it makes it easier for users to understand how everything fits together. Maybe I'm wrong, and I'm probably overthinking this :).
Thursday, June 4, 2009
Dissertation Step 1
Main focus point: P2P VPN architecture with emphasis on scalability, user-friendliness, and performance.
Scalability:
- Effects of latency on VPN connectivity
- Optimal peer count, cost per peer connection
- Trusting a P2P infrastructure
- VPN Router model
- Ability to deploy and manage large, distributed infrastructures
- GroupVPN - ability to use groups to manage VPN connectivity
- Userspace based, non-root VPN solution
- Use of proximity neighbor and shortcut selection
- Hybrid-based VPN model
- Inverted P2P VPN model
Labels:
Ph.D. Research
Wednesday, June 3, 2009
Out with the old, in with the new.
On Friday at 6 PM EDT, my computer, with components of varied ages all over 2.5 years old, did the unthinkable: it died. I'm not really sure of the entire sequence of events; the motherboard's capacitors near the PSU plug were all blown and may have been leaking for a while, and the PSU made a loud bang as I tried to see if it was broken. Using another PSU, I was able to boot the computer, but I had no USB and no working fans; I even replaced the blown caps to no avail. The PSU of course was dead, but it so happened to destroy two hard drives as well, both first-generation Western Digital SATA 3.0 drives. Luckily, I may be able to recover the data if it turns out that only the boards are dead and the drives' internals are fine.
Around 10 PM, I e-mailed my advisor with the situation, and an hour later he had sent a one-line e-mail telling me to price out a machine and aim for less than 1K. Monday morning we ordered the machine, and by Wednesday evening I had a fully functioning system.
I will admit that it really sucks that I lost a lot of time, potentially lots of data, and my own computer, but on the flip side, I am very happy and blessed to have a considerate professor who recognizes my need for a high-performance desktop computer. The machine is amazing: a new Core i7 920 with 6 GB of RAM. Even typing commands at the terminal seems faster than ever before.
As a lesson from all this, I have decided to employ a mechanism for maintaining the same (or nearly the same) home directory on all machines via rsync. When I work on a remote machine, I will begin my session with a pull from the desktop and, when I finish, a push. To perform this, I've written two wrapper scripts around rsync, one called push and the other pull (a sketch of the scripts appears at the end of this post).
Push:
rsync -Cvauz --force --delete -e "ssh -i /home/name/.ssh/key" /home/name/ name@desktop:/home/name/
Pull:
rsync -Cvauz --force --delete -e "ssh -i /home/name/.ssh/key" name@desktop:/home/name/ /home/name/
C - uses ~/.cvsignore to decide what to skip, though I could just use an exclude file. By doing this, I can have files that aren't synced, like my Mozilla Firefox plugins and other cache files.
v - verbose, probably unnecessary
a - archive; keep owner, group, modification times, permissions, symbolic links, and devices, as well as recurse into directories
u - update; skip files that are newer on the receiver
z - compress files prior to transferring
force - force deletion of a directory, even a non-empty one, when it is being replaced by a non-directory
e "ssh ..." - the remote shell to use; here it tells ssh which identity key to use
Notice the trailing forward slash ("/") on the source path: it is necessary, or rsync will copy the directory itself into the destination (so you'd end up with /home/name/name).
Beyond backing up my data, changes to my configuration files will follow me around with little effort on my part.
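For completeness, here is a rough sketch of how the two wrapper scripts might look; the paths, user name, and host name are placeholders matching the commands above, not my actual setup.

push:
#!/bin/sh
# Send the local home directory to the desktop.
rsync -Cvauz --force --delete -e "ssh -i /home/name/.ssh/key" /home/name/ name@desktop:/home/name/

pull:
#!/bin/sh
# Fetch the desktop's copy of the home directory to this machine.
rsync -Cvauz --force --delete -e "ssh -i /home/name/.ssh/key" name@desktop:/home/name/ /home/name/

Each one is just the corresponding rsync command in an executable file somewhere on the PATH, so starting and ending a session is a single short command.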