Now that we have our new WSL-only development environment in VSCode, we can extend it to support a Google Cloud Platform back end. The side project I am working on is hosted in GCP, so it is natural to add this support. In another article we will create a React Native application that listens for Firebase Cloud Messages in GCP.
This deployment follows the instructions located on GCP’s website linked here for the latest version:
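At its core, the documented install is a one-line script download. This is an assumption that the post used Google's interactive installer, which matches the description below:

```shell
# Download and run Google's interactive Cloud SDK installer
# (the documented alternative to the apt-get packages)
curl https://sdk.cloud.google.com | bash
```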
That command runs a full installation of the Google Cloud SDK. I tried an apt-get install first, but it failed because certain packages could not be found in the repository. The first thing the script asks for is the base directory for the installation; I chose the default, /home/<username>
That means from now on, your most recent version of the Google Cloud SDK files will be located in:
This doesn't match where our React Native project files or our Android SDK live, which is worth noting. Installing here still makes sense, however, as the SDK tools are really only for your specific project and the specific cloud access you need.
The script then proceeds to download all the appropriate files and code, asks if you want to contribute your anonymous data for error reporting and then kicks off the install.
Once the installation is complete, the installer will ask whether you would like your PATH modified to enable shell command completion; I recommend accepting this.
Accept the default file if it is your current rc file, then restart your WSL instance.
You should be good to go now! You can test it out in a new project folder by typing:
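The command itself was lost from this copy; it is presumably the SDK's interactive setup, which matches the steps described next (an assumption):

```shell
# Interactive first-time setup: authenticates, selects a project
# and chooses a default Compute Engine zone
gcloud init
```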
The script walks you through several steps to link the SDK to Google Cloud Platform, including authenticating to your GCP account, selecting the cloud project you are using, and choosing a default Compute Engine zone. Once that is complete you should see a message similar to this:
Congratulations, you are now linked to Google Cloud Platform!
I have been working on a lot of side projects lately. The latest thing I will be working on is a React Native app with push notifications using Google Cloud's Firebase Cloud Messaging.
Before I do anything with that though, I need to get my development environment going. I have been using VSCode for a while now with Arduino and for some Python work on Raspberry Pi, so I would like to stick with it. I also want to use Windows Subsystem for Linux, since it is basically the best thing that has happened to Windows in a long time.
First thing you need to do is install Windows Subsystem for Linux following the instructions from Microsoft:
After installing VSCode, launch it for the first time. You might see some errors; ignore them for now. A nice but little-known feature of VSCode with WSL is that running the command "code" from within WSL actually launches the native Windows executable.
Install Git for Windows
Since we want to work with source control on this project (as there will be a lot of moving parts), we need to install Git for Windows.
Restart VSCode and open the terminal; Bash should launch as your native shell. An important thing to note: your Bash prompt should look something like this:
This is a critical change to understand. When you are dealing with paths in Windows, they look like this:
But that translates into this:
This is your mounted C drive. So if you are keeping your files in:
You get to that folder by:
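As a concrete illustration of the mapping (the paths here are hypothetical examples, not the ones from the original post), a small Bash helper that translates a Windows path to its default WSL mount point:

```shell
#!/usr/bin/env bash
# Translate a Windows path (e.g. C:\Users\me\projects) to its default
# WSL mount point (/mnt/c/Users/me/projects). Paths are examples only.
winpath_to_wsl() {
  local p="$1"
  local drive="${p%%:*}"   # drive letter before the colon
  local rest="${p#*:}"     # everything after the colon
  rest="${rest//\\//}"     # backslashes -> forward slashes
  printf '/mnt/%s%s\n' "$(printf '%s' "$drive" | tr '[:upper:]' '[:lower:]')" "$rest"
}

winpath_to_wsl 'C:\Users\me\projects'   # -> /mnt/c/Users/me/projects
```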
From this directory I create or clone projects, then use Bash to cd into them and npm install / bundle install whatever I need. Each subdirectory acts as its own project folder, and each can be committed to Git or branched and updated independently.
We installed Git for Windows, but we should also add Git to our WSL so we can work in Bash in VSCode and from within WSL as well (omit --global and cd into the project directory to scope the settings to a single project).
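A minimal sketch of that WSL-side setup (the identity values are placeholders; substitute your own):

```shell
# Install Git inside WSL and set the identity used for commits
sudo apt install git
git config --global user.name "Your Name"
git config --global user.email "you@example.com"
```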
# From within your new project directory
git init
# Do some work, add some files
git add *
git commit -m "Commit message"
Now log into GitHub and create a repository matching the project folder name you chose, set it to public or private, and initialize with a README if desired. GitHub will then give you the appropriate commands to upload your repository:
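Those commands typically look like this (the remote URL is a placeholder; use the one GitHub shows you):

```shell
# Point the local repository at GitHub and push the first commit
git remote add origin https://github.com/<username>/<project>.git
git push -u origin master
```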
Next we install the React Native CLI. Installing Node gave us npm, the Node Package Manager, which lets us install Node packages and supporting tooling. So now we install the React Native CLI:
sudo npm install -g react-native-cli
After that we install the default JDK
sudo apt install default-jdk
At the time of writing the default JDK on Ubuntu is Java 1.8, so you should get output like the following from the "java -version" command:
openjdk version "1.8.0_212"
OpenJDK Runtime Environment (build 1.8.0_212-8u212-b03-0ubuntu1.16.04.1-b03)
OpenJDK 64-Bit Server VM (build 25.212-b03, mixed mode)
Now, to extend the functionality of WSL, we want to install an X server on Windows. The one I have tested is MobaXterm, which you can find here:
Download and install the Home Edition for Windows. This is required to launch any sort of graphical development component from WSL, so depending on your workflow and what you are developing you may or may not need it. Once it is installed you will need to set your DISPLAY variable in WSL.
Navigate to your home directory and edit your .profile file. At the end of the file, add the following line:
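The exact line was lost from this copy; a common choice when the X server runs on the local machine's display 0 is the following (an assumption; adjust the display number to match your X server):

```shell
# Point X clients at the first display of the local X server (MobaXterm)
export DISPLAY=:0
```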
Now save the file, exit your Linux session, and restart it. Your DISPLAY is now exported to the primary display on your workstation, which, combined with MobaXterm, allows graphical user interfaces to launch through WSL. Your Linux subsystem will need some extra packages to make that happen, so run the following install:
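The package list did not survive in this copy; a minimal hypothetical starting point that pulls in basic X11 client support is:

```shell
# x11-apps provides xeyes, xclock, etc., a quick way to test the X display
sudo apt update
sudo apt install -y x11-apps
```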
Now we are going to install Android Studio within WSL to give us the full Android tools. You could install just the command line tools and packages, but I want to experiment with the whole suite. Download the Linux distribution for your version of Linux at the following link:
I installed Android Studio into /usr/local, and because it launches via a .sh script, just adding it to your PATH won't work easily. So, going old school, I added a symlink in /usr/local/bin pointing back to it to act as an executable.
sudo ln -s /usr/local/android-studio/bin/studio.sh /usr/local/bin/studio
Once the installation is complete, we need to let Ubuntu know that you are going to share your Ethernet connection using a bridge. This requires discovering which ethX interface is your primary network. Run "ifconfig -a" and find the ethX entry with the same IP as your primary network connection in Windows. Once you have found it, edit your /etc/network/interfaces file and add the following lines (our example uses eth1):
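The original lines were lost here; a hypothetical stanza for a DHCP-configured eth1 would look like the following (interface name and addressing are assumptions; match them to your own setup):

```
# /etc/network/interfaces -- hypothetical example for eth1
auto eth1
iface eth1 inet dhcp
```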
Once you have walked through the install, Android Studio will launch for the first time. Create a new basic application and open it. Android Studio will then download the latest version of Gradle, integrate it with your development environment, and build your test application. After that you can set up a physical deployment target for testing. This is a bit tricky!
As you can see above, no connected devices are showing. You can get this to work but in a non-intuitive way. To connect to a physical device android studio relies on Android Debug Bridge (adb) and the version of adb needs to exactly match in both WSL and in Windows.
You can check the version in both shells using the command "adb version". It might not work immediately in PowerShell; you may have to add it to your PATH. What you will likely find is that the PowerShell and WSL versions are not the same, and they need to be the exact same version.
My Linux version was not the latest, and the package available through apt was not the latest either, so I had to manually install the latest release and replace those executables in /usr/local/bin.
At this point my Linux version of adb was reporting 1.0.41
I wound up installing the package manager Chocolatey (https://chocolatey.org/) and using it to manage the versions of ADB in PowerShell. This command installs Chocolatey from within an Administrator PowerShell:
Set-ExecutionPolicy Bypass -Scope Process -Force; iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))
Then I restarted PowerShell and installed the correct (latest) version of adb using these commands:
choco uninstall adb
choco install adb
Once those versions match, shut down the adb server in WSL using the command "adb kill-server", then start the adb server in PowerShell by running "adb devices", which should list your attached Android device (as long as developer options are unlocked and you have allowed access on the device). If you then run "adb devices" in WSL you should see the exact same output for your device.
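The hand-off between the two daemons can be sketched as follows (run each command in the shell named in its comment):

```shell
# In WSL: stop the Linux-side adb daemon so it releases the device
adb kill-server

# In PowerShell: start the Windows-side daemon and list attached devices
adb devices
```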
Now you can compile and run your app from Android Studio on the connected device. I will keep trying to get everything working purely inside WSL, but this is a reasonable way to test while using WSL as your entire development environment.
Interestingly, the Windows-native Bash shell in VSCode inherits all the configuration we have done above. That means from within VSCode you can launch Android Studio from the terminal, use the React Native CLI, manage all your source through Git, and use any other Linux subsystem tools you install, all through VSCode.
Configure the environment to build React Native applications.
The first thing we should do is install a couple of supporting packages, specifically Yarn (execute these lines one at a time):
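The commands were dropped from this copy; at the time, Yarn's documented apt install looked like this (an assumption based on Yarn's official Debian instructions):

```shell
# Add Yarn's signing key and apt repository, then install it
curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | sudo apt-key add -
echo "deb https://dl.yarnpkg.com/debian/ stable main" | sudo tee /etc/apt/sources.list.d/yarn.list
sudo apt update && sudo apt install -y yarn
```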
Once these variables have been added, run "source $HOME/.profile" and "echo $PATH" to verify they are part of your executable environment.
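The variables themselves were lost from this copy; for an Android/React Native setup they are typically the SDK paths, something like the following (hypothetical paths; point them at your actual SDK location):

```shell
# Hypothetical ~/.profile additions for the Android SDK
export ANDROID_HOME="$HOME/Android/Sdk"
export PATH="$PATH:$ANDROID_HOME/tools:$ANDROID_HOME/platform-tools"
```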
Creating your first React Native app
Now that everything is set up you can move into your development directory and start a brand new project by typing the following command:
react-native init <Project Name Here>
This will kick off creation of a project folder and the automated download of all required libraries. There may be warnings about core packages being out of date or missing dependencies; these appear to be safely ignorable (I might learn otherwise as I start building more React Native apps!).
Assuming everything installed correctly and your test device is properly connected and visible from Android Studio as shown earlier, you can now run your React Native project with the following commands.
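The commands are presumably the standard pair (the project name is the placeholder from the init step):

```shell
# From inside the project directory created by react-native init
cd <Project Name Here>
react-native run-android
```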
This should fire up a JS server and kick off a multi-threaded build; the first build may take a while, but after that it is fast.
If the build fails, make certain ADB in PowerShell is running the daemon for USB devices and not the one in WSL.
Once that is corrected, run the react-native run-android command again. This time you should be greeted with the following (if the command returns to a prompt, the app won't run properly; React Native should keep running in the shell):
And on your device:
Congratulations! You have just built your very first React Native application on WSL, using VSCode for all your development. Now go pour yourself a stiff drink; that was a lot of work.
Well, Automatons Adrift is finally back after a malicious hacking by… someone. In any case, I was able to recover all of the posts and I am going through doing some updates and changing things up making everything a little more informative and colorful!
Virtually every electronic device around has one of these and the technology hasn’t changed substantially in decades.
From your cell phone to your TV remote, your car to your laptop, they all have batteries. One of the most common types now is the lithium-ion battery. It works by moving lithium ions between an anode and a cathode, the same basic charge-shuttling principle other battery chemistries follow. The more ions your anode can store, the more charge the battery holds.
Now what if you could increase the capacity of your anode by 1000%? Your laptop could run for 10 hours on a single charge, your iPhone could keep going for 8 or 9 days instead of 1, and your electric car could have ten times the range, all on a single charge.
Researchers in South Korea have made this a reality: Li-ion batteries with a 1000% increase in storage! Their system works by making the anode extremely porous, increasing its surface area. Like a sponge holding water, the anode can hold far more lithium ions, allowing a much larger stored charge.
Consider the possibilities this holds for nanotechnology. Right now scientists are constructing tiny devices, but the restricting factor is the battery: it can't hold enough charge for the device to function for any length of time. This increase could let a 5-minute charge last for hours, or provide the power for the device to become far more effective. Wireless sensor networks restrict broadcast functions as much as possible because they are energy-expensive; this would allow those networks to function much more effectively.
That is just one application. Now that they know this can be done, the race will be on to make even more capable anodes. We could be on the verge of a battery revolution, something that has been a long time coming.
Ok, well, not really the evolution of time in the sense that time is everywhere in the universe and an innate feature of it. Instead, this video shows the evolution of a clock. It is a very elegant representation of how a genetic algorithm can evolve surprising and different solutions to a problem. For anyone who doesn't really understand genetic algorithms, this video gives a feel for how they work. It is quite exciting to see a solution evolve from a simple program.
This video actually comments on the long standing debate between creationism vs evolution. I do not wish to comment on that debate, I simply want to educate on the use of evolutionary algorithms. We can leave creationism vs evolution to those who enjoy the debate. If you would like to jump ahead to the beginning of the experiment go to 1:34 in the video.
Achieving a scientific discovery has been the sole domain of humans for quite some time. However, artificial intelligence is starting to catch up. Enter “Adam” an artificially intelligent robot with the tools to analyze biological data, formulate a novel hypothesis about that data, derive a set of tests to verify the hypothesis and then carry out those tests, and present conclusions from the data!
Adam's repeated experiments resulted in the system identifying a few unknown genes in baker's yeast that work together to form an orphan enzyme; this relationship was not previously known, and Adam identified it independently.
Tools like this can allow human researchers to think at even higher levels, theorizing about broad spectrum experiments and leaving the systems to explore the frontier for them.
Doing a little technical work on Automatons Adrift right now. I just upgraded WordPress to 2.7.1. I might even change the theme, though if I can't find something I really like I will probably alter the code of this theme instead.
Right now we have resistors, capacitors, and inductors, and we also have four basic circuit variables: current, voltage, charge, and flux. I am not an engineer, but my understanding is that with four variables you should be able to have four different components, not just three. The fourth component, known as the memristor, was first theorized in 1971 by Leon Chua. This theoretical fourth component would have properties not reproducible by the other three components alone or in any combination.
Both New Scientist and an even more mainstream magazine Maximum PC have reported on the memristor. This component is simply that big a breakthrough!
The memristor is indeed different from its brothers. When charge is passed through it in one direction its resistance increases; when it is passed through in the other direction, its resistance decreases. This resistance is also analog, which means a memristor can store values beyond just one or zero. Their first use will probably be solid-state memory that is faster than any existing memory, with a data density more than ten times that of today's solid-state memory such as flash. It is also less volatile than flash, so it will last longer, and the state of a memristor can be read using techniques similar to today's memory circuits, so it doesn't require any fundamental changes to the underlying hardware.
Since memristors can store large amounts of data in such a small area, they are perfect memory components for nanoscale machines, the ultimate new automata of the future. They can also double as processor components that are dynamically reconfigured as needed. So not only can the memristor function as memory, it can be made to function as a processor as well.
Memristors, due to their analog nature, function very similarly to human neural networks. A memristor can retain data from reinforcement learning very easily, and a few hundred memristors can simulate a full human neuron in a similar amount of space.
Memristors are truly an amazing breakthrough and could lead to a paradigm shift in today's technology.
Moving through busy environments is a complex task that is becoming increasingly easy for computer systems on the ground. The task is even harder in the air if you want to fly low to the ground: new obstacles, like power lines and overpasses, become serious problems. A team at Carnegie Mellon University has modified a UAV to fly low to the ground successfully.
The UAV can use a pre-loaded map or build its own as it flies using its laser range-finder sensors. The lasers sweep an oval pattern out in front of the vehicle and develop a dynamic map; if an obstacle is detected, the system plans a route around it. Current UAVs cannot fly low to the ground because they lack a system like this. The new system is planned for deployment in unmanned medical rescue helicopters.
Probably also for highly tactical seek and destroy missions too, but I will hold out hope that it is only used for the rescue helicopters.
To test these virtual humans they are going to deploy them in World of Warcraft. The virtual soldiers should be able to convince humans they are real, going as far as emulating emotion and using local slang correctly while responding to questions and communicating effectively with other players.
It appears the Army is attempting to tackle some very complex AI and Nanotechnological problems. If one of their AI soldiers does manage to fit in properly in World of Warcraft, how far off is it from passing a full Turing Test? Does it have full understanding and is able to relay tactical situation information back or does it just manage to fit in without any higher awareness of its situation?
They may be biting off more than they can chew, however the Army/DoD have been responsible for several massive technological advances. Anyone ever hear of this thing called the internet?