I have been running KVM with a bridge on my Fedora machine for a while. When I tried to run Docker on the same host, access to the KVM guests would die as soon as Docker came up. I read up online, and most places mention that the two technologies should be able to co-exist without any problems. After some further searching I found an article that mentioned that if you have already set up a bridge for KVM, you can tell Docker to use that bridge. I tested this on my setup and it worked, allowing Docker to run without interfering with the existing bridge. Looking at the interfaces, I can still see the interface Docker created, but it doesn’t seem to be active.
The Docker configuration involved adding a config file with the following entry. Make sure the bridge name matches the bridge already in use by KVM.
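The contents of the config file didn’t survive in this copy. As a sketch: on current Docker versions the equivalent setting lives in /etc/docker/daemon.json (the file path and the br0 bridge name below are assumptions — substitute your own bridge; older Fedora packages instead took a -b flag via OPTIONS in /etc/sysconfig/docker):

```json
{
  "bridge": "br0"
}
```

After changing it, restart the Docker daemon so it picks up the bridge setting.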
I finally managed to migrate my WordPress DB to the new server, after several weeks of making multiple DB dumps and restorations that would fail to load all the content from the old DB. I tried different tools that would export the DB as SQL files and used the same tools to import, even straight command-line dumps, but somehow my posts would not display correctly: the titles and dates would display, but not the post body content.
While trying to troubleshoot the cause, I opened the SQL file and started running each table’s insert query one at a time. As you can expect this sucked really bad, but I was ready to do it this way as nothing else was working. When I tried to paste the insert statements for the posts table, something weird happened: only one character would paste into the SQL console, which was strange as I expected the several hundred lines I had copied to show up. To look at this some more, I pasted the text into an editor that could format SQL, and halfway down the file the SQL highlighting changed and displayed half the post as plain text. This normally happens when there is a character that isn’t escaped properly, and it also explained why the backup imports would fail. The weird part with the backup failure is that none of the tools would throw an error, as you would expect if there was a problem with the insert queries. While the curious part of me wanted to investigate this further, the rest of me was tired after several weeks of trying to migrate the WordPress data, so I looked for another way of moving the data without having to export to SQL files.
The solution I went with was DBeaver, which has a data export method of type database that lets you copy a table from one DB to another without having to rely on SQL dumps. This worked like a charm, even though it still took a good 10 minutes to copy everything over while validating as I went. Like most sites I have DB backup scripts, but since it looks like backups created that way may not be usable if I need them in the future, I am thinking of just keeping another offline DB synced with my main DB, and using the offline copy for any restorations I might need.
Like most people, I used to put my 401K in the safe funds when choosing between safe and risky fund options. My yearly returns reflected my choices: most of the time I would average 5% or less (mostly less).
A big problem with this method of saving for retirement was that most of the money in my 401k was coming from the deductions on my paycheck. This would actually be OK if I had started working and funding my 401k at 21, but life happens and I started later, in my twenties.
To increase my returns, I started following the daily performance of all the funds available in my 401k and would manually rebalance my account every few months. I also started keeping larger balances in more aggressive funds and would rebalance every few weeks. This helped raise my yearly returns a few percentage points in some years, but not always.
I decided to write some code to automatically rebalance my account for me, since I was losing money in a lot of situations when things would come up or I would forget to log in and rebalance. In the two and a half years my code has been running, it has doubled the balance that was there when it started, averaging about 20% returns. At this rate I will have enough to retire in about 10 years if I stop my contributions, and less than 10 if I keep adding them.
If my code had not worked, my other option would have been to pay a fee for one of the services offered to manage our company’s 401k plans. Right now I’m happy my money is working just as hard as I do, but I also understand not everyone is in the same position I’m in, since my day job involves writing code.
Personally, if you are trying to improve your 401k returns, I would suggest investing more time on a regular basis (days or weeks, not months) to learn how the different funds available to you are performing, and rebalancing your account frequently; some plans will also let you set how you want your account rebalanced every month or pay period automatically, for free. If you have more time and skill, you can learn how trading systems work and build something that can evaluate your funds and rebalance your account accordingly. The last option, which I haven’t tried, would be to pay a service to automatically rebalance your account; even if you are going to go with this option, I feel you should first do option one so that you can use it as a baseline to gauge the paid service’s performance.
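For the build-it-yourself option, the core rebalancing step is just arithmetic over target weights. A minimal illustrative sketch (the fund names and weights are made up, and a real system would also have to handle trade restrictions, fees, and actual order placement):

```python
def rebalance_orders(balances, target_weights):
    """Return the dollar amount to buy (+) or sell (-) in each fund so the
    account matches the target allocation.

    balances: {fund_name: current_dollar_balance}
    target_weights: {fund_name: fraction_of_total}, fractions summing to 1.
    """
    total = sum(balances.values())
    return {
        fund: round(total * target_weights[fund] - balances.get(fund, 0.0), 2)
        for fund in target_weights
    }
```

For example, rebalance_orders({"bond": 6000, "stock": 4000}, {"bond": 0.3, "stock": 0.7}) says to move 3000 out of bonds and into stocks.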
Some people argue that your 401k should be treated like a marathon: you set up your preferences, leave it alone, and only rebalance a few times a year. This is what I did with my account for almost ten years, yet in 2.5 years of aggressively rebalancing as the market changed, I was able to add as much to my account as I had contributed in the previous 10 years, without having to depend on salary increases to grow my contributions. So I am sticking with the sprint, if it means I will need fewer years to build my retirement nest egg to where I would like it to be.
Note: All thoughts written here are my own and should not be taken as financial advice; I’m not a financial expert, just someone who likes to solve real-world problems using code.
Just brushing up on some Angular 2, I ran into this error, which didn’t make sense since I was using a tutorial from the official Angular site. It turns out the language is changing faster than the documentation can keep up with :). The cause of the error was using # with ngFor instead of let, so instead of
*ngFor="#Item of Items"
use
*ngFor="let Item of Items"
My previous Angular 2 apps are locked to earlier versions of Angular, which is why I hadn’t run into this issue, but this time around I was starting a new project with the latest Angular 2 version.
The word test here is used loosely, as I use Selenium for more than just UI tests; it makes a great browser automation tool, and I use it for that purpose a lot. While writing your browser automation, most of the time it’s easier to work in browser mode using the Firefox or Chrome driver so that you can visually inspect the HTML. Once you are done writing and testing the code, sometimes you would prefer to switch to headless mode so it can run without a UI, at which point you are likely to try out the PhantomJS driver, and your fully tested code starts throwing all sorts of errors like “Element not found” or “Stale Element Exception”. If all these errors go away when you switch back to the Chrome or Firefox driver, then the likeliest cause of your troubles is that, compared to the other browser drivers, you need to add delays in most places where the browser loads new data.
To me this seemed a bit counter-intuitive at first, as I thought headless mode should run faster and therefore require even less time to load UI changes, but I guess it might actually take a little longer since all the browser rendering is being done in software only. Just thought I’d put this out there, as I have run into the issue a few times.
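Rather than hard-coded sleeps, the usual fix is an explicit wait that keeps retrying the lookup until it succeeds or times out. Selenium ships this as WebDriverWait; the sketch below shows the underlying polling pattern in plain Python, with the condition callable standing in for a real element lookup:

```python
import time

def wait_until(condition, timeout=10.0, poll=0.5):
    """Poll `condition` until it returns a truthy value or `timeout` seconds pass.

    This is the pattern Selenium's WebDriverWait implements: instead of a fixed
    sleep, keep retrying the lookup, so slow drivers (like PhantomJS) get the
    time they need while fast drivers don't wait longer than necessary.
    """
    deadline = time.monotonic() + timeout
    while True:
        result = condition()
        if result:
            return result
        if time.monotonic() >= deadline:
            raise TimeoutError("condition not met within %.1fs" % timeout)
        time.sleep(poll)
```

With the real library, this is roughly WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.ID, "content"))).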
I recently switched from self-signed certs to free SSL certs from Let’s Encrypt, and for the first time I could load my https URL without getting the annoying prompt from Chrome caused by self-signed certificates. The only problem is the certs expire pretty fast, in about 90 days as of this writing. While that is nothing to complain about since the certs are free, handling the renewal manually each time would be a pain and could also leave me in a bind if I forgot to do it.
I decided to automate the renewal process to save myself the hassle of doing it manually, and found two resources, here and here, on how to do it. I went with a combination of the two methods, as my requirements were different.
I wanted the renewal to be run from a script that supports email notification on success or failure, similar to the first source, and to use the webroot plugin to perform the renewal, like the second source, since it has fewer steps and thus fewer failure points in the process. The script needed to run every day and check cert expiration; I didn’t want to hard-code the cron job around how long the certs are valid, so that if Let’s Encrypt changes the lifetime of the certs, no change is required on my side.
Let’s get started. I won’t cover the install, as that’s covered on the Let’s Encrypt site; I would advise you to read the different install methods and choose the one that best fits your needs.
After performing the install, create your config file, which will contain the arguments submitted to the Let’s Encrypt API. I named mine “muthii.com.ini”:
rsa-key-size = 4096
server = https://acme-v01.api.letsencrypt.org/directory
text = True
authenticator = webroot
agree-tos = True
renew-by-default = True
email = firstname.lastname@example.org
webroot-path = /your/webserver/path
Run the command used to create/renew your certs; it creates the certs for you and shows you the path where to find them.
/root/.local/share/letsencrypt/bin/letsencrypt -c /path/muthii.com.ini -d muthii.com -d www.muthii.com auth
Only run the above command if you haven’t created your certs yet or are ready to renew your current certs; otherwise just grab the script file and add it to your cron. Make sure to change the emails and file paths based on your setup. I have commented out the echo statements and only enable them for testing.
If you are doing this for the first time, locate the ssl.conf file used by your server and set the paths to the new certs.
Once you are done setting up, head over to SSL Labs and test that your certificate is recognized as expected, then set up a cron job to run the script daily.
0 2 * * * sh /path/SSLRenew.sh
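The renewal script itself isn’t reproduced here. As a hedged sketch, the daily “is it time to renew?” check can be done with openssl’s -checkend flag, which keeps the cron schedule independent of how long the certs are issued for; the cert path and 30-day threshold below are assumptions, so adjust them for your setup:

```shell
#!/bin/sh
# Renew only when the cert is within THRESHOLD_DAYS of expiry.
CERT=/etc/letsencrypt/live/muthii.com/cert.pem
THRESHOLD_DAYS=30

# openssl exits 0 if the cert is still valid THRESHOLD_DAYS from now,
# non-zero if it will have expired by then (or cannot be read).
if openssl x509 -checkend $((THRESHOLD_DAYS * 24 * 3600)) -noout -in "$CERT" >/dev/null 2>&1; then
    echo "Cert valid for more than ${THRESHOLD_DAYS} days, nothing to do."
else
    echo "Cert expiring soon, renewing..."
    # renewal command and email notification would go here
fi
```

The renewal branch is where the letsencrypt auth command from above, plus the success/failure emails, would be invoked.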
I hit this error while adding a Samba mount to my fstab, but mounting the same endpoint would work when executed from the command line. For my scenario it turned out to be an issue with cifs-utils or the kernel when the mount target sits more than one sub-directory deep within the share. My solution was to go with option 3 and expose my target directory as its own share.
"//host.IPAddress/share/subdir/subdir/target" - This failed with error "CIFS VFS: cifs_mount failed w/return code = -2"
"//host.IPAddress/share/target" - This worked
"//host.IPAddress/target" - This worked
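For reference, an fstab entry for the form that worked might look like this (the mount point and credentials file are placeholders, not from the original post):

```
//host.IPAddress/target  /mnt/target  cifs  credentials=/root/.smbcredentials,_netdev  0  0
```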
After a recent ownCloud 8.0.x update, I started getting this error logged whenever the ownCloud cron job ran. To resolve the issue I had to change the cron job to run as the user apache.
su -s /bin/sh apache -c "php -f /path/to/owncloud/cron.php"
The web server on CentOS runs as the user apache; on other Linux flavours it’s www-data. To find out what it is on your system, just check the error being logged, as it records the user running the web server.
Console has to be executed with the same user as the web server is operated
Current user: someuser
Web server user: apache <- This is the user you want.
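Putting it together, the root crontab entry might look like this (the 15-minute schedule below is an assumption — keep your own interval and install path):

```
*/15 * * * * su -s /bin/sh apache -c "php -f /path/to/owncloud/cron.php"
```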
Sometimes you may need the Java JDK available on your system without running the installer exe/msi, for various reasons. This solution worked for me with JDK 8u45; I haven’t tried it with other JDK versions, so your mileage may vary.
Thanks to @Marc T for this solution.
create destination folder (c:\jdk8)
download JDK exe from Oracle
Use 7-Zip to extract the exe into the destination folder
Open a command prompt and change into the destination folder [cd c:\jdk8]
Enter this command to unpack the contents of the folder [for /r %x in (*.pack) do .\bin\unpack200 -r "%x" "%~dx%~px%~nx.jar"]
This error is very generic, and while googling I found that different issues can cause it. I was able to resolve this particular instance by creating the folders
/tmp/.X11-unix - as root
/tmp/.ICE-unix - as user logging in
These had been deleted while manually cleaning up a previous session. The statement below was also logged when this error occurred, but it too appears to be a generic message logged for different cases whenever a session fails.
596 Session startup failed
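A sketch of recreating both directories, run as root (per the note above, ownership of /tmp/.ICE-unix may need adjusting to the user logging in). The 1777 mode is an assumption based on the standard sticky, world-writable permissions these socket directories normally carry:

```shell
# Recreate the X11 and ICE socket directories a session expects;
# both normally live in /tmp, world-writable with the sticky bit (mode 1777).
mkdir -p /tmp/.X11-unix /tmp/.ICE-unix
chmod 1777 /tmp/.X11-unix /tmp/.ICE-unix
```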