Since my previous article Upgrading to PostgreSQL 10 on CentOS 7 was so popular, I thought I'd do a follow-up for anyone looking to upgrade a very simply configured PostgreSQL 10 server to PostgreSQL 11 on CentOS 7.
First, and this goes without saying, backup your server!
Here's a quick rundown on upgrading a very simply configured PostgreSQL 9.x server to PostgreSQL 10 running on CentOS 7.
First, and this goes without saying, backup your server!
In these examples, I'm upgrading from PostgreSQL 9.5. If you're upgrading from a different version, just replace 9.5 and 95 wherever you see them with your appropriate version number.
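For reference, the core of the upgrade usually looks something like the following (paths and package names assume the PGDG repository packages; adjust 9.5/10 for your versions):

```shell
# Install the PostgreSQL 10 packages (assumes the PGDG yum repo is set up)
sudo yum install postgresql10-server postgresql10-contrib

# Initialize the new, empty 10 cluster
sudo /usr/pgsql-10/bin/postgresql-10-setup initdb

# Stop the old server before migrating
sudo systemctl stop postgresql-9.5.service

# Run pg_upgrade as the postgres user
sudo su - postgres -c "/usr/pgsql-10/bin/pg_upgrade \
  --old-bindir=/usr/pgsql-9.5/bin --new-bindir=/usr/pgsql-10/bin \
  --old-datadir=/var/lib/pgsql/9.5/data --new-datadir=/var/lib/pgsql/10/data"
```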
Make any necessary changes to postgresql.conf. I'd recommend making the changes in the new version's file rather than copying postgresql.conf over from 9.5, since there are a bunch of new options in the PostgreSQL 10 version of the file.
You can view your 9.5 configuration with:
nano /var/lib/pgsql/9.5/data/postgresql.conf
You can make your changes to the 10 configuration with:
nano /var/lib/pgsql/10/data/postgresql.conf
If you need to connect from other servers, make sure to change:
#listen_addresses = 'localhost'
to (apostrophes may not survive copy/paste, so you may want to type them by hand)
listen_addresses = '*'
(or whatever is appropriate for you)
Now do the same with pg_hba.conf
View the old configuration
nano /var/lib/pgsql/9.5/data/pg_hba.conf
Edit the new configuration
nano /var/lib/pgsql/10/data/pg_hba.conf
Start the server
systemctl start postgresql-10.service
Analyze and optimize the new cluster
./analyze_new_cluster.sh
If everything is working, set the PostgreSQL 10 service to start automatically
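To do that:

```shell
sudo systemctl enable postgresql-10.service
# optionally keep the old 9.5 service from starting at boot
sudo systemctl disable postgresql-9.5.service
```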
Since my SSL cert was nearing expiration, I thought it would be a good idea to give Let’s Encrypt (free SSL certs!) a try.
Let’s Encrypt has a helper app called certbot that will configure Apache for you automatically. The really nice thing about certbot is that it will also (via crontab) renew your cert and configure Apache to use the new cert. This is useful, since Let’s Encrypt certs expire every 90 days.
To use certbot effectively, you need an Apache configuration that's set up the way your distro expects. Mine was not (I hand-ported the configs from Ubuntu), so I figured it was a good time to reinstall Apache with the default configs, then run certbot (official instructions here: https://certbot.eff.org/ ).
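On CentOS 7 this boils down to something like the following (package names are the EPEL ones as I understand them; the official instructions linked above are the authoritative source):

```shell
# certbot lives in the EPEL repository on CentOS 7
sudo yum install epel-release
sudo yum install certbot python2-certbot-apache

# obtain a cert and let certbot edit the Apache config for you
sudo certbot --apache
```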
This initially seemed to work great, but I quickly noticed all of my subpages returned 404 errors. WordPress works best when you allow it to configure a .htaccess file to do URL rewrites. Allowing URL rewrites via .htaccess requires some additional configuration in your ssl.conf file.
sudo nano /etc/httpd/conf.d/ssl.conf
Add the following just before </VirtualHost> at the very end of your config.
<Directory /var/www/html/>
Options FollowSymLinks
AllowOverride All
Order allow,deny
Allow from all
</Directory>
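After saving, check the config and restart Apache so the change takes effect:

```shell
sudo apachectl configtest
sudo systemctl restart httpd
```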
Migrating a .Net 4.x Console Application to .Net Core
I finally got the server side of Winds of Paradise running in .Net Core! I thought I'd share how I did it, in hopes that it might help you do the same. As cool as Mono is, I'm totally psyched to have all my C# code running on Microsoft's .Net under CentOS Linux!
You can also find instructions there on how to install .Net Core onto wherever your code will be hosted.
Once everything was installed (block out a good hour for this), I opened my existing solution in Visual Studio.
To the solution, I then added another project and chose the type: Console Application (.Net Core)
Next, open that project and use the NuGet Manager to install any packages that you are using in your .Net 4.x project. For me, this was npgsql and Newtonsoft.JSON.
Once the new project is created, copy over all of your .cs files from your original project to the new .Net Core version.
Hit build, and start working on replacing any .Net 4.x functionality that is not available in .Net Core.
What I did was make corrections in the .Net Core version and then replicate those changes in the .Net 4.5 version. This way, I could build and run the old version with minor changes to prove the changes worked, rather than changing *everything* and trying to debug the .Net Core version. This worked, since .Net 4.x encompasses everything .Net Core does.
Once you've worked out all of the bugs, copy all of your code over to whatever box you're running .Net Core on (Windows, Linux, MacOS, etc.).
Open a shell, cd into the folder with project.json, and run:
dotnet restore
dotnet run
That's it! .Net will download the necessary packages from NuGet, compile, and run.
Here are some of the classes I had to find workarounds for:
System.Net.HttpWebRequest, HttpWebRequest.GetRequestStream:
The .Net Core version of this object only has async methods for GetRequestStream and GetResponse. You'll have to move to GetRequestStreamAsync and GetResponseAsync, which also means having your methods return Task instead of void.
Also, the .Net Core version does not have HttpWebRequest.ContentLength when doing a POST. So far, simply removing it seems to work fine in both .Net 4.5 and .Net Core.
NpgsqlDataAdapter:
I'm guessing this could probably be made to work with .Net Core fairly easily, but moving NpgsqlDataAdapter usage to NpgsqlDataReader for better database efficiency has been on my TODO list for quite a while anyway.
System.Configuration:
I found a great resource for setting up json configuration files at: https://csharp.christiannagel.com/2016/08/02/netcoreconfiguration/
A couple notes:
1) I had previously been targeting .Net 4.5. I needed to target 4.5.1 in order to install Microsoft.Extensions.Configuration.
2) If you create appsettings.json in your .Net 4.5 project first, make sure to create appsettings.json fresh in your new .Net Core project; DON'T COPY it from the .Net 4.5.1 one. Creating it new sets it as a "Content File".
System.Timers.Timer:
This was a tricky one. System.Timers.Timer makes it easy to add and remove event handlers with Elapsed += [function]. This allowed me to have a couple of static timers that any other object could add events to.
The only alternative in .Net Core is System.Threading.Timer. This timer is much less sophisticated: it accepts a single callback at construction, and that callback cannot be changed. My workaround was to implement a separate timer for each object that needed one. I'm hoping this does not increase resource consumption. Hopefully a better timer alternative will work its way into NuGet; I didn't see anything that looked promising at the moment.
The rest should be added to the bottom of the config file.
I’m assuming you already have an SSL cert you are using for your website. You can use this cert for vsftpd as well.
Next, create a user that will be used for FTP.
It’s important to set the home directory with the “-d” option to where your website files are. I’m assuming the default /var/www/html.
sudo adduser -d /var/www/html [ftp-user]
Set a password for the user. Make sure to choose something secure!
sudo passwd [ftp-user]
Add the user to the apache group, so that it will have write access to /var/www/html/*
sudo gpasswd -a [ftp-user] apache
Make sure that apache has read/write to the WordPress files
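Something like this should do it (assumes the default /var/www/html web root; group write is what lets the FTP user in the apache group modify files):

```shell
sudo chown -R apache:apache /var/www/html
# directories: rwx for owner and group; files: rw for owner and group
sudo find /var/www/html -type d -exec chmod 775 {} \;
sudo find /var/www/html -type f -exec chmod 664 {} \;
```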
To the best of my knowledge, these are the SELinux commands necessary both for vsftpd to work and for Apache to FTP into the server and update itself.
SELinux booleans to enable the functionality we need
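As a sketch, these are my guesses at the relevant booleans; verify what's available on your system with getsebool -a before relying on them:

```shell
# let vsftpd read/write files outside users' home directories
sudo setsebool -P ftpd_full_access on
# let Apache (WordPress) make outbound network connections for updates
sudo setsebool -P httpd_can_network_connect on
```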
A quick reminder to myself (and you if you’ve come across my little site) to change SELinux file ACLs when uploading new files to be served by Apache (httpd) on Centos.
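The command itself, assuming the default web root:

```shell
# reset uploaded files to the default SELinux context for web content
sudo restorecon -Rv /var/www/html
```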
I have some Linux servers that I’d like to talk to my Windows Server 2012R2 file server.
Since I’d like daemons, rather than users, to be able to communicate with the server, I thought this would be a good candidate for NFS.
Linux Side (1st round)
(I’m using Centos, but the general concept will apply to Fedora, Ubuntu, etc.)
Install the daemons that will access the file server. Most of these will create their own users.
Create any additional users you would like to be able to access the file server. You can always add more later.
To save some complexity (and not assume you pay for Active Directory), I’m not going to have my file server look up Linux IDs via Active Directory. Instead, I’m going to use flat passwd and group files, just like Linux.
Copy (via SSH, USB, copy/paste, whatever) the passwd and group files from /etc/ over to your Windows server.
You can delete all of the entries for users/groups that will not be accessing the share.
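For example (the account names here are placeholders; use whichever daemons and users will actually touch the share):

```shell
# keep only the relevant entries in the copies you send to Windows
grep -E '^(tvheadend|transmission):' /etc/passwd > passwd
grep -E '^(tvheadend|transmission):' /etc/group > group
```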
Windows Side
Copy the passwd and group files to:
%SystemRoot%\system32\drivers\etc\
Create users (and groups) on your server with the same user name / group name as you created on your Linux server.
UPDATE: Make sure you set the Windows users to never have their passwords expire if they are service accounts. If they do, the users will lose access to the shares via NFS when the password expires.
The passwd and group files serve as a map between the user/group IDs in Linux and the user/group names in Windows.
Install Server for NFS on the Windows server.
Server Manager->Manage->Add Roles and Features
Next->Next->etc. until installed.
Browse to the folder on your file server you are looking to share.
Right click on it and choose Properties
Go to the NFS Sharing tab
Click the “Manage NFS Sharing” button
Check the “Share this folder” check box.
The only other change I make here is to uncheck the “Enable unmapped user access” option so that only users in the passwd file we copied over will have access to the server.
Next, click on the Permissions button at the bottom
I like to set “All Machines” to be no access, that way only the servers I specify will be able to mount the share.
Click the “Add…” button.
In the “Add Names:” box, enter the IP address of your Linux server.
Make sure Type of Access is set to the type you are looking for.
I prefer to leave “Allow root access” unchecked for a bit more security.
Press OK, OK, Close
If everything worked, the folder icon should now look like this:
Using the security tab, assign NTFS permission to the folder for the users you would like to be able to read/write to that folder, just as you would if it were an SMB share.
UPDATE for TVHeadEnd:
Many Linux daemons will use the same id for both the user and group.
Some, like tvheadend, will use different group and user IDs.
For these, it’s critical to setup a group with the same name (and with the user as a member) in Windows and assign permissions to the group as well the user.
Otherwise, you will get permission denied errors.
Linux Side (2nd Round)
Install the NFS client and enable (make start on boot) and start the services.
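On CentOS that looks something like:

```shell
sudo yum install nfs-utils
# rpcbind is needed for NFSv3 mounts
sudo systemctl enable rpcbind
sudo systemctl start rpcbind
```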
Create a folder that will be used as the mount point for the file server, aka: where you go to get to the files on the file server.
I was really hoping to find a definitive “this is where to mount nfs shares” article, but some Binging around came up with nothing.
I will therefore advise you create a folder under /mnt, as that feels right to me.
sudo mkdir -p /mnt/[server name]/[share name]
It’s finally time to give the share a test.
Run:
sudo mount -t nfs [server name or ip]:/[nfs share name] /mnt/[server name]/[share name]
If you receive an access denied error, you may need to specify NFS v3
sudo mount -t nfs -o nfsvers=3 [server name or ip]:/[nfs share name] /mnt/[server name]/[share name]
Make sure you are logged in as a user with permission to that folder and cd into it:
cd /mnt/[server name]/[share name]
You should now be able to create files and folders! (which will of course be visible on the file server as well)
The final step is to have the server automatically mount the share on boot.
sudo nano /etc/fstab
Add a line similar to:
[server dns name or ip]:/[share name] /mnt/[file server name]/[share] nfs defaults 0 0
If you needed the nfsvers=3 option earlier, instead use:
[server dns name or ip]:/[share name] /mnt/[file server name]/[share] nfs nfsvers=3 0 0
Give the server a reboot to test automatic mounting
sudo shutdown -r now
When you reboot, the share should be mounted and all is good in the world!
PS: If you are using this for transmission-daemon (which I’m assuming you’re using for legitimate purposes), make sure you edit your settings.json file and set umask=0, otherwise transmission will create folders that it cannot create files in.
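A sketch of that edit (the settings.json path is an assumption based on the CentOS package defaults and may differ on your system; stop the daemon first or it will overwrite your change on exit):

```shell
sudo systemctl stop transmission-daemon
# set "umask": 0 in settings.json so created folders are group/world writable
sudo sed -i 's/"umask": [0-9]*/"umask": 0/' \
  /var/lib/transmission/.config/transmission-daemon/settings.json
sudo systemctl start transmission-daemon
```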
I was working on getting PostgreSQL, Visual Studio Remote Debugger, and PHP running on Server 2012 R2 so that I can up my debugging-fu, rather than just relying on Console.WriteLine.
Ran into some DLL hell trying to get npgsql working. I saw NuGet mentioned while Binging for solutions, so I figured I’d give it a try. Where has this been all my coding life?
A few clicks, and remote debugging is up and running. I even copied the compiled files over to my production box, and everything is working fine in Centos on Mono as well. What a great way to spend a Friday morning off!