Monday, July 22, 2024

GitHub and SSH. How Do I Make It Work?

 If you have a GitHub repo but haven't used it in a while, or if you're just setting one up, you may have noticed the old username/password approach is not an option anymore when trying to push from your local repo.

My weapon of choice (at least, for Java development) is Eclipse, and I wanted to be able to push/pull and all that good stuff using the UI.  There are a few gotchas to that, so here's how to set up ssh with your GitHub account.

First, you'll want to create your SSH key pair.  This is straightforward.  Just open your favorite console application and type:

ssh-keygen -t ed25519 -C "youremailaddress@somemailprovider.com"

Then answer the questions it prompts you with.  You DO want to set a passphrase when it asks.

Now your user home folder will contain a folder called .ssh.  (It may be hidden.)  Go into that folder and you'll see the public key you just created.  (Hint: it's the file that ends in .pub.)  Open it in a text editor and copy its contents to your clipboard.  On Windows, an easier way that ensures you won't accidentally pick up any stray spaces is the command:

clip < filename.pub

This does the same thing.  (On macOS the equivalent is pbcopy < filename.pub, and on most Linux desktops xclip -selection clipboard < filename.pub will do it.)

Now log into your GitHub account.  On your account page, click the little icon with your picture in the upper right corner and select "Settings."



On the left of the settings page, select "SSH and GPG keys."


Click the "New SSH Key" button.


Give it a title and then paste the contents of your clipboard into the "Key" text area.  Finish up by clicking the "Add SSH Key" button.

As Bon Jovi would say, we're halfway there.


Now we need to configure Eclipse.

Open up Eclipse and select "Preferences" under the "Window" menu.


Under General > Network Connections select "SSH2."


The SSH2 Home field should already be populated; if not, make sure it points to the .ssh folder in your home directory.  (Or, if you like to be a rebel, point it to whatever folder you put your new key in instead of the default.  I won't judge.)

Hit the "Add Private Key..." button and use it to pick your private key.  That's the file that matches the name of the public key but without the ".pub" extension.

Under the Authentication Methods tab, make sure publickey is checked.


Now, back to the list of Preferences.  Select Version Control (Team) > Git.

Make sure "Store SSH key passphrases in secure store" is checked.  The first time you connect to your GitHub repo, it will prompt you for the passphrase you set when creating the key.  After that, the passphrase is kept in the secure store and you won't be asked for it again.



Not done yet...

When you first connect to a GitHub repo you'll have some configuration to do on the connection protocol.  If this is a repo that already exists,  you can get to those settings by:

Right click the project in the Package Explorer window.

 Select Team > Remote > Configure push to upstream...



In the line where it shows the URI, click "Change..."


Under Connection, change the Protocol setting to ssh.  Under Authentication, make the username "git."

I know that isn't your GitHub username.  Trust me.  Put "git" in there anyway.
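(If you also use git from a plain command line, outside Eclipse, the same settings can live in your ~/.ssh/config file.  A minimal entry, assuming the default key filename from the ssh-keygen step earlier, looks like this:

```
Host github.com
    User git
    IdentityFile ~/.ssh/id_ed25519
```

That User git line is the same trick: every SSH connection to GitHub authenticates as the git user, and GitHub figures out who you actually are from your key.)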


Now test it!  Open a console and run ssh -T git@github.com.  You'll be asked to confirm GitHub's host key on the first connection, then for your passphrase, and you should get back a greeting like "Hi username! You've successfully authenticated, but GitHub does not provide shell access."



It isn't very complex, but there are some gotchas.  Follow these steps and you should be good to go.


 


Thursday, July 4, 2024

Things that can't play nice. EACCES when installing Angular

Have you ever had one of those experiences where you try to install something on your system and absolutely every. single. step. takes far more effort and hassle than it ought to?  Before long you're exhausted from fighting your machine, you feel more lost than you did when you started, and you're wondering what it would take to roll your system back to before you began, just to get rid of all the extra junk left over from the battle you've been having.

Yeah.

It is often the case that we work with technologies that are constantly changing.  Angular, for example, releases a new version every 6 months.  That usually isn't a big problem if you already have it installed and running.  Then, all you have to do is update.

But what happens when you're setting up a new environment?  When something goes wrong, you head over to Stack Overflow, Baeldung, or whatever your favorite source of advice is and look for the answer.  Simple enough; we do that all the time.  But what happens when there are a dozen different methods of doing what you want, some of which still work on the latest versions and some of which don't, plus a variety of other approaches that will lead you astray because whoever offered the advice made assumptions about your environment that may not hold?

I decided I needed to refresh my Angular skills (among others) so I created a new Ubuntu Linux VM on my machine.  I prefer Linux environments to Windows anyway.

"Well you know, ArcticFox, installing Angular on Windows is very easy.  Just do that!"

"Easy" is not the goal.  The goal is to learn something, exercise skills, and avoid giving Microsoft more data to sell. 

So we take our Linux VM, running Ubuntu 24.04 LTS.  What's our first step?

Step 1

Well, first we need our JavaScript runtime, Node.js.  Check the minimum Node version required by the Angular release you plan to install; recent Angular releases want Node 18 or newer.

After doing the usual sudo apt-get update to refresh your package lists...

Your screen should look like this.  Don't get used to it,
 things will not continue going this smoothly for long.

Great, our package lists are up to date, so we use apt-get to install nodejs.

sudo apt-get install nodejs

Now, some sources will tell you to use the command "apt-get install nodejs npm."  Do not do that.  npm comes with nodejs anyway so there's no need to specify it, and if this is a fresh install then some of the package dependencies won't yet have been installed and you'll get a nasty error that will have you chasing your tail needlessly.  (Is there ever a time when one does need to chase their tail?)

This is what it looks like without errors.  If yours looked
like this on your first try, then pat yourself on the back.

Now we see if the installation went well.  We can do that by seeing what happens when we run node and npm each with a -v.  This proves that the commands are now in our path and shows us what version we just installed.

node -v
npm -v

Hopefully, your screen looks something like that with no 
"Command not found" errors.

Now, if at some point something went wrong and you're getting an error about missing dependencies when installing nodejs, don't panic.  Just go ahead and install them.  This command should cover everything you need:

sudo apt install curl gnupg2 gnupg git wget -y

Some of this stuff is probably already on your system but it won't hurt anything to include them in the command.

If you had dependency errors, yours won't look like this because it will have been 
installing the stuff you were missing.   Hopefully.

And now, we install Angular using the Node Package Manager.  Note that we are not using sudo here, because we're letting the Node Package Manager handle everything.  In fact, it would be a BAD idea to use sudo here.

npm install -g @angular/cli

It is at this point that some people, those who were favored by the Universe, will be able to continue.  The rest of us will  have to deal with this hideous error:


The Pain Train has left the station.

The problem with any "permission denied" error in Linux is that it could mean you don't have permissions, but it could also mean the destination file path doesn't exist.

Now, I know that the output there suggests running the command again as root.  (By using sudo).  Do not do it.  I know it's tempting.  It would probably even work to get past this step, but if you do you're asking for headaches down the road.

So the script failed when it tried to do a mkdir on /usr/lib/node_modules.

So first let's make sure the node_modules folder exists.
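A quick way to do that (the screenshot here showed a directory listing) is something like:

```shell
# List the folder itself (-d) so we see its owner and permission bits.
# On Ubuntu, /usr/lib/node_modules is where npm puts -g packages.
# The fallback echo just keeps this safe to paste on a machine without it.
ls -ld /usr/lib/node_modules 2>/dev/null || echo "no node_modules folder yet"
```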


There it is!

Ok it's there.  Note also the permissions.  The owner is root, and nobody else gets write permissions.

What, if anything, is already in that folder?

Stuff.  Stuff is what's in that folder.

So that folder was indeed created when we installed npm, but we did that with sudo, remember?  So of course the folder is owned by root.  

No, that doesn't mean rerunning the Angular install with sudo.  Chill.
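The command in the screenshot was along these lines (my reconstruction; the folder is the one named in the error above): take ownership of /usr/lib/node_modules instead of escalating the install itself.

```shell
# Make yourself the owner of the global node_modules tree ($USER expands
# to your login name; -R recurses into anything npm already put there):
#
#   sudo chown -R "$USER" /usr/lib/node_modules
#
# The same syntax demonstrated on a scratch folder, safe to run anywhere:
d=$(mktemp -d)
mkdir "$d/node_modules"
chown "$(id -un)" "$d/node_modules"   # no sudo needed on a folder you made
stat -c '%U' "$d/node_modules"        # prints the new owner: you
rm -r "$d"
```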


Now WE have the power.

Ok so now you own the folder.  Let's run that install again.

Chooo choooooooo!

Same error!  Wait... no... not the same.  Now we have a problem where it wants to create a symlink in another folder that it doesn't have permissions on.  Namely, /usr/bin.  Now, I'm not really a fan of taking ownership of that folder.  It can have unexpected side effects, especially if anyone else uses that same machine.  So what we can do is give write permissions just for now without changing the owner.

sudo chmod o=rwx /usr/bin

Now the folder will look like this:

Now everybody can write to it...  Hm...

Now we run the install once again.


And you don't even have to use the wrong argument like I did to test.  Neato!

Run the install once more and you should see no errors.  Now when you enter

ng --version

You'll get a response with the version of Angular that's installed on your machine.

Now, I'm not a fan of leaving critical Linux folders with wonky permissions so let's put /usr/bin back the way it was and test to be sure we're still good.
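That restore step (shown in the screenshot) would be, assuming you made the same o=rwx change above:

```shell
# Give "others" read and execute again, but take away write (needs sudo):
#
#   sudo chmod o=rx /usr/bin
#
# Then confirm the bits are back to the default drwxr-xr-x:
stat -c '%A %U' /usr/bin
```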

Presto!

And now we can even check to see what new stuff we have 



And now the @angular folder being present tells us that all is right with the world.  I'm inclined to leave ownership of node_modules with my own user, since it isn't a critical Linux folder and I'm the only one who will ever use this VM.  If you're sharing the machine with others, though, make sure that folder is accessible to them as well if they'll be doing any work with npm, since they may need to install modules there too.
















Monday, July 19, 2021

Connection refused: Why is my UNIX socket client giving me this?

Language:  C++

Environment: Ubuntu 20

Application:  Creating a UNIX Socket server


If you go out onto the Internet looking for help with this, you'll find oodles and oodles of articles, forum posts, blog posts, tutorials and everything in between explaining how TCP sockets work and how to create them, use them, debug them, take them out to the park and run around with them...

...but UNIX sockets get very little love, and that's not so good when you run into a problem with one.

(The easiest way to tell the difference is to look at the code presented by one of these sites.  Look at the socket() system call.  If the first parameter is AF_INET, you're working with an Internet socket.  If it says PF_UNIX (or its synonym AF_UNIX), you're where you want to be.)

A UNIX socket is essentially a special file.  That's it.  That means all the stuff that you could run into when working with files can also impact you when working with UNIX sockets.  The beauty of this is that if you have any experience working with files from your C++ program, then you already have the experience you need to debug problems with your UNIX sockets.

Suppose your "Connection refused" error comes right after a line that looks something like this:

connect(server_,(const struct sockaddr *)&server_addr,sizeof(server_addr))

Inside server_addr (a struct of type sockaddr_un) is a field that holds the path name.  Yes, that means a file path.  So what might cause a connection refused?  Well, what sorts of things cause a failure when you try to open a file?  Maybe the file doesn't exist.  Maybe the process doesn't have access privileges on that file.  (See where I'm going with this?)

When the connect() function tries to open that UNIX socket, the socket needs to:

  • Exist.  Don't try to manually create it yourself using mkdir or touch.  That socket gets created when you run a UNIX socket server.  
  • Be a socket.  This is why you don't create it yourself.  When the socket server creates the socket and calls the bind() system call, the socket can now be connected to by a socket client.  Yes, that implies that you have to start your socket server before your client.
  • Be accessible.  Whatever user your process is running as needs to have permissions on that socket in whatever file directory it exists in.  
This is what a socket looks like on the file system when you do ls -l:

srwxrwxr-x  1 arcticfox arcticfox    0 Jul 19 21:13 my_sock

This one is in the /tmp directory, which is a nice, safe, out of the way place.  Notice the first letter for the file type is 's' as opposed to 'd' which it would be if it were a directory.  That's how we know it's a UNIX socket.   

So if you've done everything correctly but your client is refusing to connect, check these three things.
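Those three checks can be done from a shell before you ever touch the debugger.  A small sketch, using the /tmp/my_sock path from the listing above:

```shell
# Walk the three requirements in order and report the first one that fails.
S=/tmp/my_sock
if [ ! -e "$S" ]; then
  echo "$S does not exist -- start the socket server first"
elif [ ! -S "$S" ]; then
  echo "$S exists but is not a socket -- something created a plain file there"
elif [ ! -w "$S" ]; then
  echo "$S is a socket but this user cannot write to it -- check permissions"
else
  echo "$S looks good -- exists, is a socket, and is writable"
fi
```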


So it's been a while since I've posted anything here.  It isn't that I died or stopped developing software.  It's that I'm in a radically different field of development and have pretty much left web development behind for the time being.

Now I'm working in Artificial Intelligence and robotics.  That is exactly as cool, and as terrifying, as it sounds.

I haven't posted lately because much of what I've been doing over the last few years has either had very little relevance to development in general or it was proprietary stuff that I couldn't share publicly even if there was some use in doing so.  

In my current position though, I get to work with lots of open source and/or non proprietary software as well, and I'll try to share what I learn as I learn it, as before.

Monday, December 18, 2017

Problems with Primefaces Picklists

Primefaces is awesome. JSF is awesome.

Know what isn't awesome? The documentation.

Don't get me wrong... The Primefaces documentation and the Showcase demo site are very good for introducing the components and the basics of how to use them.  The problem is that sometimes they're a little too basic.  For example, the picklist component.  It says right in the Primefaces PDF that most often, POJOs will be used with the picklist, and that it is necessary to implement a Converter to handle them... and that's it.  It goes on to show how some facets can be used to make the component look more spiffy, but the developer is left with very little information on what the Converter is for, exactly, and how to implement one.

That's why this post exists.

So here's an example that will hopefully help.

<p:pickList converter="#{shipConverter}" itemLabel="#{ship.name}" value="#{backingBean.pickDLM}" var="ship"/>

So here, converter is just the name we give to our instance of the ShipConverter class, which implements Converter.  itemLabel is the field within each contained object that will be displayed in the picklist panes.  value is the DualListModel defined in the backing bean, which serves as the data source for the picklist.  var is the reference to each individual item, the same as in a table or tree.

So how does that DualListModel work?

In your backing bean, you'll need your data source for the picklist. 

DualListModel<Ship> pickDLM = null;  //matches the value field on the view
List<Ship> shipSource = DatabaseSourceObject.getShips();
List<Ship> shipTarget = new ArrayList<Ship>();

Notice that the source object is a list of Ship objects which will populate the left pane of the picklist.  The target is the right pane, and is initially empty.  If your project needed to pre-populate a few items in the right pane, you would add them to the target object here.  So now we put it all together:

pickDLM = new DualListModel<Ship>(shipSource, shipTarget);

So then all you need in your backing bean is the getters and setters for the DualListModel. 

Now for the Converter...

@FacesConverter(value="shipConverter")
@ManagedBean(name="shipConverter")
public class ShipConverter implements Converter{

  public ShipConverter(){} //Constructor stub

  //Converts the strings in the picklist view back into the objects they represent.
  @Override
  public Object getAsObject(FacesContext context, UIComponent uiComponent, String value){
    PickList pickList = (PickList)uiComponent;
    DualListModel listModel = (DualListModel)pickList.getValue();
    for(Object item : listModel.getSource()){
      Ship ship = (Ship)item;
      if(ship.getName().equals(value)){
        return ship;
      }
    }
    return null; //No matching object found
  }

  //Gets the String value of an object in the picklist.
  @Override
  public String getAsString(FacesContext context, UIComponent uiComponent, Object object){
    return ((Ship)object).getName();
  }
}

That's it.  The idea is to specify exactly how to switch back and forth between the collection objects and their String representations in the picklist.

Enjoy!


Tuesday, April 11, 2017

CentOS 7 Networking in VirtualBox

I see a lot of results when Google searching on this problem and there is a wide variety of solutions, so this one may or may not work for you. It's a hybrid of a couple others I saw, and was arrived at through trial and error.

The environment:

Windows 7 (Host OS, and no I don't have Admin rights)
CentOS 7 (Guest, I have root privileges)
Oracle VM VirtualBox 4.3

Installing CentOS 7 into VirtualBox requires that you explicitly tell the VM that the ethernet cable is plugged in. That's no big deal in and of itself, but there's some configuration to be done.

Open the file /etc/sysconfig/network-scripts/ifcfg-enp0s3

Note that the filename is based on the name of your ethernet connection. If you don't know what that is, run ip addr (or ifconfig, if it's installed) to find out.

Modify the following two lines in the file:

BOOTPROTO=shared change to BOOTPROTO=dhcp

ONBOOT=no change to ONBOOT=yes
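For reference, after those two edits the relevant lines of the file look like this (a sketch; your generated file will also contain UUID, HWADDR, and other lines, which you leave alone):

```
NAME=enp0s3
DEVICE=enp0s3
BOOTPROTO=dhcp
ONBOOT=yes
```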

Reboot your CentOS and verify. That's the approach that worked for me.

Monday, March 6, 2017

Liferay 7 Portlet pom.xml Issues in Eclipse

So I'm learning my way around Liferay 7 and one of my first efforts was to create a sample portlet for it.  The issue I was experiencing was that two of the dependencies were not able to resolve:

        <dependency>
            <groupid>com.liferay.portal</groupid>
            <artifactid>portal-service</artifactid>
            <version>${liferay.version}</version>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupid>com.liferay.portal</groupid>
            <artifactid>util-java</artifactid>
            <version>${liferay.version}</version>
            <scope>provided</scope>
        </dependency>

The problem is that these dependencies, being auto generated by the Eclipse Liferay plugin, are using the older Liferay 6 naming conventions.  The newer naming conventions look like this:

        <dependency>
            <groupid>com.liferay.portal</groupid>
            <artifactid>com.liferay.portal.kernel</artifactid>
            <version>2.22.0</version>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupid>com.liferay.portal</groupid>
            <artifactid>com.liferay.util.java</artifactid>
            <version>2.2.2</version>
            <scope>provided</scope>
        </dependency>

Note, the artifactId now has a '.' delimited format instead of '-', and the version isn't the same as the Liferay version.  A complete list of the new names appears here.