Wednesday, November 28, 2012

Working with Postgres on OS X Mountain Lion


Installation

Macs come with Postgres installed (/usr/bin/psql; 9.1.4 on mine, purchased in November 2012), but I also installed the Postgres app, which appears to have installed 9.1.4 as well (downloaded in early November 2012).

  1. Install the Postgres app
  2. Start the Postgres app.  When I start the app, I get the Postgres elephant in my menu bar -- clicking it shows the port Postgres is running on.
  3. Use ps -ef | grep postgres to find the paths to the Postgres bin and data directories.  The output will look something like this: 501 22332     1   0  9:28PM ??         0:00.25 /Applications/Postgres.app/Contents/MacOS/bin/postgres -D /Users/hm1/Library/Application Support/Postgres/var -p5432 -- here /Applications/Postgres.app/Contents/MacOS/bin/postgres is the path to the executable (so its parent is the bin directory), -D /Users/hm1/Library/Application Support/Postgres/var gives the data directory and -p5432 shows the port Postgres is running on.  Sadly, the data directory path contains spaces, which means you'll have to enclose it in quotes wherever you use it.
  4. Add the Postgres bin to your path in the current session or to your .bash_profile.  e.g., export PATH=/Applications/Postgres.app/Contents/MacOS/bin:$PATH
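A sketch of steps 3 and 4 using the paths from my ps output (yours will differ); the main point is the quoting needed because of the space in the data directory path:

```shell
# Paths taken from the ps output above -- adjust for your machine.
PGBIN="/Applications/Postgres.app/Contents/MacOS/bin"
PGDATA="/Users/hm1/Library/Application Support/Postgres/var"   # contains a space!
export PATH="$PGBIN:$PATH"
# Always quote the data directory or the shell will split it at the space:
echo "data directory: $PGDATA"
```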

Troubleshooting

Even with the Postgres bin directory appearing first in my PATH, I need to specify the data files or host location when running the Postgres commands (psql, postgres, etc.).  If this doesn't work for you, you can alias the commands, uninstall the base installation, or just use the base installation.  Starting a new terminal session seems to get around this issue where just sourcing the updated .bash_profile doesn't.

If you get the following error when issuing postgres, your path is not correct and/or Postgres is not running:

postgres does not know where to find the server configuration file.
You must specify the --config-file or -D invocation option or set the PGDATA environment variable.
If you modified the PATH variable via .bash_profile, don't forget to source it or start a new terminal session.
Using ps as above you can see what the -D argument needs to be, or add the Postgres path section prior to the existing PATH elements.

If you get the following error when issuing psql:
psql: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/pgsql_socket/.s.PGSQL.5432"?
then you need to start with psql -h localhost, or add the Postgres path section prior to the existing PATH elements.
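If you'd rather not type -h every time, libpq's standard PGHOST and PGPORT environment variables do the same thing (a sketch; set them per session or in .bash_profile):

```shell
# With these set, a bare `psql` behaves like `psql -h localhost -p 5432`:
export PGHOST=localhost
export PGPORT=5432
echo "connecting via $PGHOST:$PGPORT"
```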


Resources

http://postgresapp.com/documentation
http://www.postgresql.org/docs/9.1/interactive/tutorial.html


Monday, October 1, 2012

JS Console

JS Console is a tool I enjoy for messing around with JavaScript without having to create a bunch of HTML (although you can work with the DOM in JS Console). This is proving very handy as I go through JavaScript books/tutorials and want to try code out quickly.  JS Console allows you to do remote JavaScript debugging as well.


How do I use JS Console?

Type or paste JavaScript into the text bar at the top.  You can also load JavaScript libraries (like jQuery) or a DOM.

JS Console remote debugging

I don't do mobile development, but it looks like there's a neat way to use your computer's browser to debug JavaScript on your phone(!) or on another machine/session.  And there's a native iOS app.  I have not experimented with the remote debugging.


What can I access in JS Console?

DOM

Looks like you can get to everything in your browser.  If you type document in the text field, for example, you'll get the HTMLDocument object represented as a String.  


localStorage and sessionStorage

You can also view and manipulate localStorage and sessionStorage if supported by your browser (when I did this in IE7 I got undefined as these aren't supported until IE8).  

I added items to localStorage and sessionStorage, opened another Chrome window and verified that in Chrome and Firefox separate tabs represent new sessions.  I also learned that JS Console uses sessionStorage to store your history.  And I saw that localStorage is browser-specific.  All Chrome windows and tabs saw what I'd added to localStorage in Chrome even after closing and reopening browsers.  Same thing with Firefox, but Chrome didn't see Firefox's localStorage and vice versa.

Sunday, September 30, 2012

JavaScript

I've been looking at JavaScript a bit over the past six months.  

Books
The most helpful titles I've read are:
Eloquent JavaScript
Intro to programming using JavaScript as the vehicle.  Covers programming fundamentals quite a bit.

JavaScript: The Good Parts
Less of a beginning programming guide and more of a close look at the language itself.

Learning JavaScript (currently reading)
Intro to JavaScript that assumes an understanding of programming fundamentals.

Tools
QUnit is what I've used to unit test my JavaScript (even if I'm not using jQuery's features in my JavaScript).  I find that unit tests are a great tool for helping me figure out how a language/framework works: they let me submit different scenarios easily, and they persist the questions I've asked and answered about a language/framework.
QUnit gave me some trouble around the setup, but it was worth the effort, especially once I got into writing more complex functions.

JSConsole is a tremendous tool that I just discovered (today!).  Those of you familiar with consoles for Ruby, Groovy, IO, etc. understand how helpful they can be for quickly answering implementation questions.  For example, I wanted to know what would happen if I executed slice(-10) on an array with three members.  Instead of having to build the HTML, the JavaScript, connect the two and figure out how to view the results (alert, innerHTML, etc.) I used the console and it took me about a minute to create the array and try a bunch of slice variations.

W3Schools Tryit Editor for JavaScript is a helpful tool if you want an easy way to experiment with DOM modification in JavaScript.  On the left side you have HTML and embedded JavaScript -- on the right side you have the output.  You can also use this to experiment with embedded CSS.

Browser debuggers/consoles are handy for debugging (using breakpoints) or monitoring DOM manipulation.  F12 on a PC keyboard (Windows or Ubuntu) will toggle the console for Chrome and Firefox (if you have Firebug installed for the latter).

Monday, September 10, 2012

dd-wrt for Linksys WRT54G v8

http://dd-wrt.com/wiki/index.php/Linksys_WRT54G_v8.0_%26_v8.2

My sister's WRT54G v8 with stock firmware started hanging when distributing IPs, so I decided to put dd-wrt on it.  I had done the same with my WRT54G v1.1 with no problems so I assumed it would be a snap.  I was going to use it for a client bridge (see Andy Frank's post).

Ubuntu needed for tftp
Using tftp to upload dd-wrt.v24_micro_generic.bin didn't work from my Windows XP box, so I gave it a shot from an Ubuntu 12.x box and that worked.

Brick
Then I got myself into a jam since the primary subnet was non-standard.  I tried to change the secondary router (the one I just flashed), but couldn't connect to the web interface at all.  I did the 30-30-30 reset a few times with no luck and thought I had bricked it.  The Power and WLAN lights no longer glowed, but the light corresponding to the port connected to my computer did.

Wait, then unbrick
Then I let the whole thing sit overnight.  The next evening I was able to connect to the dd-wrt web interface, and then found that it was serving IPs just fine.

Whew...

Create client bridge
I gave the now-working WRT54G v8 back to my sister.  I still didn't have what I was looking for: a client bridge for my ancient Pentium IV running CentOS.  I want the box close to me instead of close to the router behind the TV where it gets hot and is hard to reach.

I ran into issues with a primary (Belkin) broadcasting in N and a secondary (Linksys) only capable of B and G.  Also, the primary has different WPA2 options than the secondary.  And the secondary wouldn't let me select a channel in client bridge mode.  This whole thing is probably easier with two identical routers both running dd-wrt.

Success with compromises
Setting the primary to B and G mode, changing channel selection from Auto to a fixed value and moving from WPA-PSK+WPA2-PSK/TKIP+AES to WPA2-PSK/AES did the trick.  My plugged-in Ubuntu laptop connected to the internet once I disconnected from the wired connection I'd set up per the dd-wrt instructions and connected using the Auto Ethernet option.

Back out compromises
Then I began backing out the changes I'd made to the primary.  Moving back to B, G and N worked fine.  Setting channel selection back to Auto didn't affect the secondary's ability to connect to the internet either.  I reverted the primary to WPA-PSK+WPA2-PSK/TKIP+AES and was still able to connect to the internet through the secondary.  Seems like this is kind of what Andy Frank did, but he may have found it possible to go back to WPA2 on the secondary.

Monday, June 18, 2012

cygwin windows processes and ps: the W option

http://cygwin.com/cygwin-ug-net/using-utils.html


Hard to believe I haven't been bothered by this before: if one runs ps in cygwin, Windows processes are not displayed unless the W option is used.



$ ps -ef | grep post*
-- nothing



$ ps -efW | grep post*
       0    3220       0 ?          Jun 17 D:\PostgreSQL\9.1\bin\postgres.exe
       0    5684       0 ?          Jun 17 D:\PostgreSQL\9.1\bin\postgres.exe
       0    3808       0 ?          Jun 17 D:\PostgreSQL\9.1\bin\postgres.exe
       0    3072       0 ?          Jun 17 D:\PostgreSQL\9.1\bin\postgres.exe
       0    3620       0 ?          Jun 17 D:\PostgreSQL\9.1\bin\postgres.exe
       0     412       0 ?          Jun 17 D:\PostgreSQL\9.1\bin\postgres.exe

Unfortunately I've not found a cygwin way to kill the Windows processes.  I'm stuck with the Windows way:

$ kill -9 1600
-bash: kill: (1600) - No such process



$ taskkill /F /PID  1600

SUCCESS: The process with PID 1600 has been terminated.
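A half-measure I could use is to let cygwin dig out the PID and then hand it to taskkill.  A sketch using a captured line of the ps -efW output above (the Windows PID is the second column):

```shell
# A sample line captured from `ps -efW` output:
line='       0    3220       0 ?          Jun 17 D:\PostgreSQL\9.1\bin\postgres.exe'
# awk pulls out column 2, the Windows PID:
pid=$(echo "$line" | awk '{print $2}')
echo "$pid"
# then kill it the Windows way:  taskkill /F /PID "$pid"
```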

Monday, May 28, 2012

Create Ruby on Rails development environment for Windows XP: DOS prompt and pik

This post details a Windows setup for the book Rails 3 in Action.  Having done both an Ubuntu 12.04 and a Windows XP RoR setup, I strongly recommend the former: much quicker and easier.


I was having some issues with my Cygwin/RVM install so I decided to try using the DOS command prompt and pik instead.



http://stackoverflow.com/questions/9189628/install-ruby-on-rails-on-windows
I think the answerer to this post incorrectly uses bundler to try to install gems (at least for me it gave an error, but using pik gem install <gemname> worked).


Note that for the console I'm using the Windows DOS command line instead of cygwin.

Installed Ruby via the Ruby installer

    Added Ruby to path as part of the installer option

Installed pik

D:\> gem install pik
D:\> pik_install <directory_in_PATH>

Ran pik

D:\> pik
D:\> pik list ## confirm proper Ruby version



Installed DevKit

Ran installer to %DEVKIT_HOME%

%DEVKIT_HOME%\> ruby dk.rb init
%DEVKIT_HOME%\> ruby dk.rb install

Test the DevKit installation

%DEVKIT_HOME%\> pik gem install rdiscount --platform=ruby
%DEVKIT_HOME%\> ruby -rubygems -e "require 'rdiscount'; puts Discount.new('**Hello there, Eric!**').to_html"
Should output 
<p><strong>Hello there, Eric!</strong></p>

Installed gems using pik

I didn't find a lot of pik usage information, but I decided to install gems with pik.  Using 'gem list' and 'pik gem list' in different consoles leads me to believe that the installation methods achieve the same result.  Pik doesn't appear to have native gemset-like commands, but see this post for a hack.
D:\> pik gem install bundler --pre
D:\> pik gem install rake
D:\> pik gem install activesupport
D:\> pik gem install mysql
D:\> pik gem install libv8
D:\> pik gem install rails
D:\> pik gem install execjs
D:\> pik gem install jquery-rails


Installed locally downloaded gems

D:\> pik gem install linecache19-0.5.13.gem


This one required the source code for my Ruby version
D:\> set RVM_SRC=<path_to_source>
D:\> pik gem install ruby-debug-base19-0.11.26.gem -- --with-ruby-include=%RVM_SRC%




Errors

Gem::RemoteFetcher::FetchError: SSL_connect returned=1 errno=0 state=SSLv3 read
server certificate B: certificate verify failed (https://rubygems.org/gems/<gem_name>)

I got the above error when running bundle install.  See this post for several fixes.  This one worked for me, but I had to create the destination directory before running the script.

Sunday, May 27, 2012

Working on an existing Rails application in Ubuntu 12.04

My previous post described environment setup.  This post will focus on getting an existing application up and running.


Getting the application

I'm using Aptana and the existing project is on a bitbucket Mercurial repo.  Using the IDE I checked out the application.


The following commands are run from inside the project directory structure.

Bundler

Use bundler to get the appropriate gems for the project (see the file Gemfile).  I initially got errors due to missing packages on the Ubuntu side (postgresql-server-dev-all was not installed).


$ bundle


Database setup

Create databases

Looking at database.yml I see which databases I need to create.  Mine uses the postgresql adapter (see the adapter fields) and two different databases.  Using psql's CREATE DATABASE <database_name> command I set up both the test and development databases.


Create role

I needed to create a role for my Linux user using CREATE ROLE <role_name>


Populate databases

I needed to add execjs and therubyracer to my Gemfile.  I also needed to add `, :require => false` after my shoulda and shoulda-context gems in Gemfile per https://gist.github.com/1549790.  Per the link it seems that the load worked fine without the require => false.


$ rake db:schema:load


Run the application

$ rails server


http://localhost:3000 will display the RoR welcome screen.  You'll have to modify the URL based on your application.


Dude, where's my gemset?

If the .rvmrc file for the project on which you're working specifies a gemset that you don't have and/or that you didn't use to install the proper gems, you can copy the one you did have set up to match the one specified in the file.  This is especially helpful if you want to use the Terminal in Aptana.
I needed to manually rerun the ruby debug installation because that failed during the copy.

$ rvm gemset list
$ rvm gemset copy 1.9.3-p194@<existing_gemset> 1.9.3-p194@<.rvms_gemset>
$ export RVM_SRC=$HOME/.rvm/src/ruby-1.9.3-p194
$ gem install ruby-debug-base19-0.11.26.gem -- --with-ruby-include=$RVM_SRC

Install Ruby on Rails development tools on Ubuntu 12.04

I ran into a number of issues installing this entire stack and then running a 1.9.3 / 3.2.0 application.  I won't detail the errors here.  The order here is somewhat important as the postgresql-server-dev-all was required during bundling.
Please see my next post for information regarding setup for a specific application.

Install postgreSQL

https://help.ubuntu.com/community/PostgreSQL

$ sudo apt-get install postgresql
$ sudo apt-get install postgresql-server-dev-all

Looks like the database was initialized and started for me and user postgres was created.  I used ps -ef | grep postgres to figure out where the relevant directories used by my installation were.  The installation placed the install in /usr/lib/postgresql/9.1 and the data directory in /var/lib/postgresql/9.1/main.  Configuration files are in /etc/postgresql/9.1/main.


User postgres didn't have the postgresql bin directory on the path, so I created a new .bashrc file for postgres to avoid reinitializing variables every time I open a console.
export PATH=/usr/lib/postgresql/9.1/bin:$PATH
export PGDATA=/var/lib/postgresql/9.1/main

I modified pg_hba.conf to use the trust method so I wouldn't have to supply a password to the database.  This was done since the database.yml file that is part of the project assumes no password needed for db access.
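For reference, the trust lines look something like this (a sketch of /etc/postgresql/9.1/main/pg_hba.conf -- trust is fine for a throwaway dev box, but don't use it on anything shared):

```
# /etc/postgresql/9.1/main/pg_hba.conf
local   all   all                  trust
host    all   all   127.0.0.1/32   trust
```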

Install adminpack to assist pgAdminIII

$ sudo apt-get install postgresql-contrib-9.1
$ sudo -u postgres psql < /usr/share/postgresql/9.1/extension/adminpack--1.0.sql
$ psql
postgres=# CREATE EXTENSION adminpack;


Install pgAdminIII

$ sudo apt-get install pgadmin3

Install Mercurial

$ sudo apt-get install mercurial

Install Aptana IDE

http://www.aptana.com/products/radrails


Install the Eclipse/Aptana MercurialEclipse plugin

Here's the update site:
http://cbes.javaforge.com/update

Install RVM

$ sudo apt-get update
$ sudo bash -s stable < <(curl -s https://raw.github.com/wayneeseguin/rvm/master/binscripts/rvm-installer)
$ source ~/.rvm/scripts/rvm
$ rvm get head && rvm-smile
$ sudo apt-get install libtool
$ rvm pkg install libyaml
$ rvm install ruby-1.9.3-p194

$ rvm --default use 1.9.3

Install Rails

$ gem install rails -v 3.2.0
$ gem install execjs
$ gem install therubyracer

Install bundler


http://stackoverflow.com/questions/6438116/rails-with-ruby-debugger-throw-symbol-not-found-ruby-current-thread-loaderro



$ rvm get stable
$ rvm use ruby-1.9.3-p194@<gemset_name> --create
$ wget http://rubyforge.org/frs/download.php/75415/ruby-debug-base19-0.11.26.gem
$ wget http://rubyforge.org/frs/download.php/75414/linecache19-0.5.13.gem
$ gem install linecache19-0.5.13.gem
$ export RVM_SRC=$HOME/.rvm/src/ruby-1.9.3-p194
$ gem install ruby-debug-base19-0.11.26.gem -- --with-ruby-include=$RVM_SRC
$ gem install bundler --pre


Dude, where's my gemset?

If the .rvmrc file for the project on which you're working specifies a gemset that you don't have and/or that you didn't use to install the proper gems, you can copy the one you did have set up to match the one specified in the file.  This is especially helpful if you want to use the Terminal in Aptana.
I needed to manually rerun the ruby debug installation because that failed during the copy.

$ rvm gemset list
$ rvm gemset copy 1.9.3-p194@<existing_gemset> 1.9.3-p194@<.rvms_gemset>
$ export RVM_SRC=$HOME/.rvm/src/ruby-1.9.3-p194
$ gem install ruby-debug-base19-0.11.26.gem -- --with-ruby-include=$RVM_SRC




Wednesday, April 11, 2012

Oracle SQL dynamic order by clause

I never had a dire need for this before today, but it is something I could have used many times in the past to simplify queries and/or use a query instead of a function.


SELECT *
  FROM table alias
 ORDER BY DECODE(:bindVariable, 'option1', alias.column0, 'option2', (alias.column1 * alias.column2)) DESC NULLS LAST,
          alias.column3 ASC;

It has been around for at least a decade so, although I don't write a ton of SQL or PL/SQL, I'm surprised I haven't seen it before today.

Restoring "lost" windows on Windows 7

I'm not sure why, but Firefox always seems to disappear on me.  If I maximize it from the Task bar I can see it, but when I click to resize it, it disappears.


This post helped me to get it back on screen in less-than-maximized mode.

Wednesday, March 28, 2012

Linux process information: STIME date and time

I've been using "ps -ef | grep <some_string>" for several years.


But I found this post today and it helped me solve a problem that required more specific STIME information.


Here's the command
ls -ld /proc/<pid>
where <pid> is your, um, PID.
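The trick works because /proc/<pid> is created when the process starts, so the directory's timestamp is the full start date and time.  A quick sketch checking it against the current shell ($$):

```shell
# The /proc entry's timestamp is the process start time (more precise than STIME):
ls -ld /proc/$$
# GNU ps can also print the full start time directly:
ps -o lstart= -p $$
```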

Sunday, March 11, 2012

Installing Linksys PCI wireless firmware on Ubuntu 11.10

The wireless card on my vintage Gateway (2003) was spotty after moving from XP to Ubuntu, but I dug through the old hardware bin, found a WPC54G PCI card and was able to get it up and running.

https://help.ubuntu.com/community/WifiDocs/Driver/bcm43xx

Saturday, March 10, 2012

Installing Git on Centos 5.8

http://kb.liquidweb.com/install-git-on-centos-5/
http://thebuildengineer.com/index.php?itemid=11


> yum install gcc
> yum install zlib-devel
> wget http://git-core.googlecode.com/files/git-1.7.10.rc0.tar.gz
> tar xvzf git-*.gz
> cd git-*  
> ./configure
> make
> make install

Saturday, February 18, 2012

JDBC Diagnosability


I'm working on an application that uses a cached ConnectionPool based on the latest Oracle JDBC driver on the WebLogic application server (version 10.3). Since the pool is not controlled by WebLogic, it is very difficult to use P6Spy or other tools to see what SQL the application is generating (I've not been able to find any way to use SQL inspection tools with the current connection configuration). SQLTrace will give statement information (including performance numbers), but will not give visibility to bind variables. Finally, if there are issues with JDBC objects on the Java side, SQLTrace is of little help.
This page offers configurable logging around JDBC operations, including bind variable visibility and column-level return value visibility. Sadly, you have to manually match up your bind variables with their respective positions in the statement, but at least bind variable values are available to you.
You'll know if you've successfully configured JDBC Diagnosability since you'll get a lot of red text appearing in the console (when using Eclipse at least) as soon as you begin to communicate with the database.
This document goes into greater detail.

Server side diagnostics

Below are the steps I took to get the debug information to print to the console. These instructions assume you are using startWebLogic.cmd to run WLS.
  1. Locate ojdbc*_g.jar in your WLS installation. There's one for JRE 5 and one for JRE6. I found these in %WLS_HOME%\wlserver_10.3\server\ext\jdbc\oracle\11g\
  2. Modify startWebLogic.cmd to place the jar at the front of the classpath:
    set CLASSPATH=C:\wls1032\wlserver_10.3\server\ext\jdbc\oracle\11g\ojdbc5_g.jar;%SAVE_CLASSPATH%
  3. Create OracleLog.properties and place it in a directory on the WLS instance. See the example at the bottom of this post. Unfortunately I've not found a good comprehensive explanation of this file, but I've thrown together something that works for console logging. Logging to a file is not working for me as of now.
  4. Modify WLS start script by adding -Doracle.jdbc.Trace=true and -Djava.util.logging.config.file=<OracleLog.properties>
    -Doracle.jdbc.Trace=true -Djava.util.logging.config.file=C:\wls1032\wlserver_10.3\server\ext\jdbc\oracle\11g\OracleLog.properties

JUnit diagnostics

If you've got a situation that you want to debug and that you can recreate using JUnits you're in a good place. Your test will typically take less time to run than any operation via the GUI and you don't need to undeploy, shut down, restart and redeploy to a server constantly.
In the Run Configurations view for your test class/method do the following:
  1. Add to VM arguments on the Arguments tab
    -Doracle.jdbc.Trace=true -Djava.util.logging.config.file=C:/wls1032/wlserver_10.3/server/ext/jdbc/oracle/11g/OracleLog.properties
  2. On the Classpath tab, click Add External JARs and locate your ojdbcX_g.jar file (where X is '5' or '6' per your JDK version), then click Up to make sure this comes before any other things in the User Entries selection
  3. Run your test using the Run Configuration you've modified.
OracleLog.properties example
# OracleLog.properties Copyright Oracle 2007

# Controls output of java.util.logging output for Oracle JDBC drivers

# See the Javadoc for OracleLog for more information.

# The OracleLog system uses the services of 
# java.util.logging.*  This file is a starting
# point for controlling that output via a properties
# file. Note that there is also a programmatic interface 
# for java.util.logging which may be used as well. That
# would allow finer grained control and the ability to
# change the logging as the program runs.

# Copy this file to your runtime directory to use as a
# starting point. You should expect to change it to
# suit your needs. 

# To enable logging controlled by this file start your
# main java class with the switches

# -Doracle.jdbc.Trace=true 
# -Djava.util.logging.config.file=OracleLog.properties
# -Djava.util.logging.config.file=C:\wls1032\wlserver_10.3\server\ext\jdbc\oracle\11g\OracleLog.properties

# See also the file logging.properties in the jre/lib directory
# in your JDK installation and the JDK documentation.

# default file output is in user's home directory.
java.util.logging.FileHandler.pattern = C:\wls1032\wlserver_10.3\server\ext\jdbc\oracle\11g\app_jdbc.log
java.util.logging.FileHandler.limit = 500000
java.util.logging.FileHandler.count = 1
java.util.logging.FileHandler.formatter = java.util.logging.XMLFormatter

# log to the console by default
handlers = java.util.logging.ConsoleHandler

# for sqlnet tracing uncomment the lines below
# oracle.jdbc.diagnostics.DemultiplexingLogHandler.pattern= jdbc_%s.trc
# oracle.jdbc.diagnostics.DemultiplexingLogHandler.formatter = java.util.logging.SimpleFormatter
# handlers = oracle.jdbc.diagnostics.DemultiplexingLogHandler

# default is to log everything generated. control what is generated below
java.util.logging.ConsoleHandler.level = ALL
java.util.logging.ConsoleHandler.formatter = java.util.logging.SimpleFormatter

# JDBC uses the following Levels. Levels lower than FINE produce output that
# may not be meaningful to users. Levels lower than FINER will produce very 
# large volumes of output.
# INTERNAL_ERROR  Internal Errors
# SEVERE          SQLExceptions
# WARNING         SQLWarnings and other invisible problems
# INFO            Public events such as connection attempts or RAC events
# CONFIG          SQL statements
# FINE            Public APIs
# TRACE_10        Internal events
# FINER           Internal APIs
# TRACE_20        Internal debug, sqlnet tracing (oracle.net.ns.level)
# TRACE_30        High volume internal APIs
# FINEST          High volume internal debug

# Uncomment and/or change the levels for more detail
oracle.jdbc.level = FINEST
oracle.jdbc.connector.level = FINEST
oracle.jdbc.driver.level = FINEST
oracle.jdbc.internal.level = FINEST
oracle.jdbc.oci.level = FINEST
oracle.jdbc.oracore.level = FINEST
oracle.jdbc.pool.level = FINEST
oracle.jdbc.rowset.level = FINEST
oracle.jdbc.util.level = FINEST
oracle.jdbc.xa.level = FINEST
oracle.jdbc.xa.client.level = FINEST
oracle.jpub.level = FINEST
oracle.net.ns.level = FINEST
oracle.sql.level = FINEST

Version Control with Git

Updates: 11/21/2012 

I'm now using Git with a Subversion repository at work

About all I do to interact with Subversion is git svn rebase and git svn dcommit.  The first command updates my local master and the second commits to the remote Subversion repository.  We create branches for just about anything but the most minor change, rebase to the local master and commit to the Subversion repo from there.


git stash

My favorite find (and one which I didn't see mentioned in my edition of Pragmatic Version Control Using Git) is git stash.  git stash lets you throw changes in a locker and brings your current branch back to the last checkout state.  I use this all of the time, but one frequent use is to quickly test whether and/or which of my local changes is causing a test to fail or online functionality to get weird.  git stash list shows all of the stashes that you have, uh, stashed.  git stash apply brings changes back from the stash.
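A throwaway-repo sketch of the round trip (the repo, file name and contents are made up for the demo):

```shell
# Set up a scratch repo with one committed file.
tmp=$(mktemp -d) && cd "$tmp" && git init -q
git config user.email you@example.com && git config user.name you
echo one > f.txt && git add f.txt && git commit -qm init
echo two >> f.txt                  # an uncommitted change
git stash > /dev/null              # working tree is back to the last commit
git stash list                     # the change is parked in stash@{0}
git stash apply > /dev/null        # ...and now it's back
tail -1 f.txt
```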

Git UI tools

At my new shop developers are all on Macs.  Some folks use GitX and others use SourceTree, but I've found the command line is quick and easy enough.  And I prefer the Eclipse file diff functionality to the diff mechanisms of either of the aforementioned GUIs.  Since Eclipse isn't in the loop with my local Git repo, I just compare with local history if I have a large enough change I need to diff.



From 02/18/2012

I've worked quite extensively with CVS and Subversion.  With the latter I've actually set up and administered my own repositories.  But I never read much about either VCS outside of picking through http://svnbook.red-bean.com/ when I couldn't figure out how to do something.


I've been using Git on and off for about six months for a variety of non-work projects and figured it was time to understand what I was doing so that I could properly maintain branches and tags and conduct merges.  Enter Jon Loeliger's Version Control with Git for a good (albeit somewhat old -- 2009) overview.

Turning a local repo into an authoritative remote repo
I had a local repo that I wanted to share with my labmates.  The process involved cloning my local repo to another location

$ git clone --bare ./local_repo remote_repo.git
$ zip -r remote_repo.git.zip remote_repo.git
$ mv remote_repo.git.zip /gitpub/Depot
$ cd /gitpub/Depot
$ unzip remote_repo.git.zip

and then modifying my local repo to point to the new remote one

$ git remote add origin "//192.168.1.1/gitpub/Depot/remote_repo.git"
$ git remote update
$ git branch -a
  * master
  remotes/origin/master


Eclipse integration
Plugins
EGit seems to work pretty well.  I'm having some issues with pulls and fetches from my repo that used to be the authoritative one, but the command line gets me what I need.  Push and synch are fine.

Diff local with remote repository

  • Right click on the project or a specific directory/file
  • Compare With -> Branch, Tag or Reference... -> Remote Tracking -> origin/master


User stuff
Indexes
I can modify a file, run git add and then continue to modify the file.  The index will not be updated with my local version of the file until I run git add again.

What I'm not used to from other VCSs is the fact that, from the command line, I can add a file (place changes in the index) and have a different version of the file in my working directory.  With Subversion or CVS I just think of the repository version and my working version -- there's no 'staged' version in any index.  When I execute a commit, only the changes that are in the index are committed -- the working directory file is not committed.
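A quick demonstration of that three-version state (throwaway repo; names and contents invented for the demo):

```shell
# Scratch repo with one committed file.
tmp=$(mktemp -d) && cd "$tmp" && git init -q
git config user.email you@example.com && git config user.name you
echo v1 > f.txt && git add f.txt && git commit -qm init
echo v2 > f.txt && git add f.txt   # v2 is now staged
echo v3 > f.txt                    # the working copy moves on to v3
git show :f.txt                    # the index still holds v2
git commit -qm staged
git show HEAD:f.txt                # the commit contains v2, not v3
cat f.txt                          # the working copy is still v3
```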

Diff local with remote repository

From the command line, git diff origin/master --name-status will give you the names of the different files and not clutter up the console with the differences themselves.


Fetch versus pull
I saw a pull described as "a fetch followed by a merge".


Admin stuff
Examine object (blob, tree, etc.) content.  This output could be the contents of a file or a directory.
git cat-file -p <dir_prefix><rest_of_SHA1_for_blob>


List files/directories in an object
git ls-files -s


Other resources
Pro Git

Find DOS-style short names in Windows: dir /x

Having recently begun working on Windows 7 I was disappointed to find that MSFT still has spaces in important directories (Program Files, for instance): this is often a cause of syntax issues, usually when running *sh scripts from Cygwin.


In most cases quoting the path name works, but not always.  I've found that the short style DOS directory name works in most cases where quotes fail.


C:\>dir /x
03/17/2011  08:42 PM                25              AUTOEXEC.BAT
05/05/2011  09:33 PM    <DIR>                       bea
07/19/2010  10:37 AM               211 BOOT_G~1.INI boot_GDISK32_copy.ini
08/26/2008  06:45 AM                 0              CONFIG.SYS
12/21/2011  10:39 PM    <DIR>          DOCUME~1     Documents and Settings
07/19/2010  10:19 AM    <DIR>                       DRIVERS
03/17/2011  08:42 PM                91 GPROLO~1.BAT gprologvars.bat
07/19/2010  10:44 AM    <DIR>                       Intel
02/04/2012  08:45 AM    <DIR>          PROGRA~1     Program Files
11/10/2011  08:40 AM    <DIR>          QUARAN~1     Quarantine
10/24/2010  09:23 PM    <DIR>                       TEMP
02/16/2012  07:58 PM    <DIR>                       WINDOWS

Thursday, February 9, 2012

PL/SQL debugging with SQL Developer: good for custom type parameters


Before reading this I had had issues with debugging functions/procedures where one or more parameters is a custom type.  And getting it to work smoothly with a Java debug session/JUnit/etc. wasn't happening for me.

Thanks Sanat!





Sunday, January 29, 2012

iPhone reference

Setting up Google Sync and viewing multiple calendars
http://www.idownloadblog.com/2010/07/28/how-to-sync-shared-google-calendars-with-your-iphone/
Basically, Google Sync is setting up your Google account as an Exchange account.  There are still extra steps, however, if you want to see calendars other than your primary calendar.  For example, my wife wants to see my calendar entries on her phone as she does online.