Back in Bordeaux since March 2019.
Hey, I'm Nicolas, a 25-year-old French web developer. I hold a master's degree in Software Engineering from the University of Bordeaux, which gave me the core skills of a software developer: software architecture, programming and project management. Alongside my courses, I grew interested in web development and taught myself the main technologies. I am able to design the full stack of a website, but my major strength is the server side.
After a six-month end-of-studies internship in London as a web developer at a major 3D printing company, followed by eight months as a web developer at a small, friendly software company in Bordeaux, I decided to gain more freedom by working as a freelancer.
Interested in freelancing, I was browsing job boards now and then to see whether anything caught my eye. That gave me the idea of designing an interface that centralises freelance projects in one place, the way Indeed.com does for regular jobs.
Building on the web scraper I had already made, I designed a daemon script running continuously on my VPS to gather job offers from multiple freelance platforms. I then connected it to my HTTP server over an IPC connection and built a simple web interface based on Bulma and ReactJS.
Newly posted projects from the different platforms are scraped and pushed to every connected browser client (thanks to Socket.IO) to be displayed to the user. The app also includes a filter box that lets the user add or remove source platforms and filter out projects based on keywords selected from a keyword list.
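The filter-box logic can be sketched roughly like this (the field names `platform`, `title` and `description` are assumptions for illustration, not the actual data model):

```javascript
/**
 * Keep only projects whose source platform is enabled and which do not
 * match any of the excluded keywords (case-insensitive).
 */
function filterProjects(projects, enabledPlatforms, excludedKeywords) {
  const platforms = new Set(enabledPlatforms);
  const keywords = excludedKeywords.map((k) => k.toLowerCase());
  return projects.filter((p) => {
    if (!platforms.has(p.platform)) return false;
    const text = `${p.title} ${p.description}`.toLowerCase();
    return !keywords.some((k) => text.includes(k));
  });
}
```

Server-side, each freshly scraped batch would then simply be broadcast to all connected sockets, with the filtering applied client-side per user.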
The project is still a prototype because the web scraper is sometimes fickle, but it works well and can be used here. Currently, five platforms (Freelancer.com, Peopleperhour.com, Guru.com, Truelancer.com and Twago.com) are available, but I would really like to add Upwork, since it is the world's biggest freelance platform. I am presently trying to use their API to access their public data; it would be great if they allowed me to do that.
EDIT: Although it was very interesting, I gave up the project; the sources of the web scraper can be found here. As a result, there is nothing left to see on the platform.
My third freelance job (also earned on fiverr.com) was a small NodeJS script that reads keywords from a product spreadsheet, fetches the first Google Images URL associated with each product, uploads that image to the image hosting platform imgur.com, and then fills in the spreadsheet with the returned Imgur URL.
If you want to do the same or something similar, you can check out the project's source code here. You need to request an access token to use the Imgur API, and also to provide an xlsx file from which to read the keywords.
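The overall flow of the script can be sketched like this; `searchFirstImage` and `uploadToImgur` are hypothetical stand-ins for the real Google Images lookup and Imgur API call (injected here so the pipeline itself is easy to follow and test):

```javascript
/**
 * For each spreadsheet row, look up the first image hit for the keyword,
 * re-host it on Imgur, and attach the resulting URL to the row — the same
 * value the real script writes back into the xlsx file.
 */
async function fillImageUrls(rows, searchFirstImage, uploadToImgur) {
  const out = [];
  for (const row of rows) {
    const imageUrl = await searchFirstImage(row.keyword); // first Google Images hit
    const imgurUrl = await uploadToImgur(imageUrl);       // re-hosted on imgur.com
    out.push({ ...row, imgurUrl });
  }
  return out;
}
```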
My main project is a personal web server designed to run my applications and this website. I have been working on it for a few years, implementing new features and enhancements now and then. It is based on NodeJS/ExpressJS and backed by a MongoDB database. The server is able to serve many static or dynamic applications, to run third-party scripts or tools, and to manage socket connections thanks to Socket.IO. Currently, it runs on a small Linux-based virtual private server.
The server is mainly composed of 4 parts:
Each application module (in the "app_modules" folder) must implement 3 methods, called by the server at startup:
Current server modules (services callable from application modules):
The server has 3 running modes (CLI parameter):
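The module contract can be sketched roughly as follows. The actual method names are not listed above, so `load`, `start` and `stop` are placeholders I chose for illustration:

```javascript
/**
 * Hypothetical sketch of registering an application module: the server
 * checks that the three lifecycle methods exist, then calls the first two
 * at startup (`stop` would be kept for shutdown).
 */
function registerModule(server, mod) {
  for (const method of ['load', 'start', 'stop']) {
    if (typeof mod[method] !== 'function') {
      throw new Error(`module is missing required method "${method}"`);
    }
  }
  mod.load(server); // e.g. register routes and socket handlers
  mod.start();      // begin serving
  server.modules.push(mod);
}
```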
This project is a basic file hosting platform that allows uploading and downloading files as simply and quickly as possible. The main purpose was to deepen my knowledge of AngularJS v1. I used the MEAN stack (MongoDB, ExpressJS, AngularJS, NodeJS) and the CSS framework Bootstrap to design the application.
The application includes user account management: you can register, log in and log out on the website. If you own an account, you can manage your files (edit filenames, remove files, check the download counter and last download date).
I personally use this platform to store some small files of my own; you are welcome to use it too (the website is in French), but please do not upload too many large files, as I am running out of space on the server. The project is available here.
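The per-file bookkeeping behind the download counter and last download date can be sketched in plain JS (the field names are assumptions, not the actual MongoDB schema):

```javascript
/**
 * Return an updated copy of a file record after one download:
 * the counter is incremented and the last-download date refreshed.
 */
function recordDownload(file, now = new Date()) {
  return {
    ...file,
    downloads: (file.downloads || 0) + 1,
    lastDownloadAt: now,
  };
}
```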
The application allows creating, updating and deleting movie records in a very simple way. There is a view displaying the movie collection as a grid (thumbnails and titles), a view containing a form to create a movie, and a view containing a form to update a movie's information.
I have no use for this application myself, but it might be a good demo to show AngularJS in action. You can take a look here (for safety, I deactivated the update and delete methods).
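Stripped of the AngularJS views, the three operations boil down to a small CRUD store like this (an in-memory stand-in for the real backend, with simple counter ids):

```javascript
/**
 * Minimal movie store: create/update/remove mirror the three form views,
 * and list() is what would feed the grid of thumbnails and titles.
 */
function createMovieStore() {
  const movies = new Map();
  let nextId = 1;
  return {
    create(data) { const id = nextId++; movies.set(id, { id, ...data }); return id; },
    update(id, data) { movies.set(id, { ...movies.get(id), ...data }); },
    remove(id) { movies.delete(id); },
    list() { return [...movies.values()]; },
  };
}
```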
This is a functional prototype, conceived to learn ReactJS, that provides a way to download single or multiple videos from Youtube in different formats; it is not intended for real-world use. It is a server-side rendered ReactJS application, composed of a text input (accepting Youtube URLs) and a component list displaying some information (title, thumbnail and duration) about each video. Another interesting aspect of this app is the dynamic end-to-end socket communication, with Socket.IO.
First, the user enters one or several Youtube video URLs (or playlist URLs). After submission, the server loads the video list and, over the socket connection, returns each video's data as soon as it is loaded; the data is displayed client-side dynamically as ReactJS components. Once loading is complete, the user selects the videos to download (or all of them), the format of each video (AAC, FLAC, MP3, M4A, Opus, Vorbis or WAV) and the format of the archive (Rar, Zip, 7z, GZip or Tar). After submission, the server downloads each video in the chosen format and builds an archive. Finally, the user can download this archive.
Once the client disconnects from the server (socket connection), all the files are destroyed server-side.
Regarding communication with Youtube, the server drives a command-line tool, youtube-dl, a well-known and fairly reliable third-party program that makes it easy to fetch metadata and to download and convert Youtube videos. However, the CLI tool can sometimes be capricious and rather slow, and consequently not very efficient for a production application.
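A server like this one might assemble its youtube-dl invocations along these lines. `-j` (dump metadata as JSON) and `-x --audio-format` (extract audio in a given format) are real youtube-dl flags; the output template and the shape of these helpers are my own illustration, not the app's actual code:

```javascript
// Arguments for fetching a video's metadata (title, duration, ...) as JSON.
function metadataArgs(url) {
  return ['-j', url];
}

// Arguments for downloading a video and extracting its audio track.
function downloadArgs(url, audioFormat, outDir) {
  const allowed = ['aac', 'flac', 'mp3', 'm4a', 'opus', 'vorbis', 'wav'];
  if (!allowed.includes(audioFormat)) {
    throw new Error(`unsupported audio format: ${audioFormat}`);
  }
  return ['-x', '--audio-format', audioFormat,
          '-o', `${outDir}/%(title)s.%(ext)s`, url];
}
```

Each argument array would then be handed to something like `child_process.spawn('youtube-dl', args)`, one process per video.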
The application was originally designed to let me download my personal Youtube music playlists all at once. The app is available here; you can use it to take a look at the process, but do not use it for your personal needs.