Blog

  • harbormaster

    Harbor Master

    Harbor Master is a simple script to help manage Docker remote socket forwarding.

    Specifically, this script uses SSH forwarding to forward the Docker socket of a remote machine to your local environment. This is quite useful when running heavy Dockerized applications, which can be run on some remote/cloud server, while being treated as a local container.

The benefit Harbor Master provides is that it not only maintains the SSH tunnel for the remote socket, it also forwards the exposed container ports. This is a real gap in the current Docker implementation, and Harbor Master just makes your life that much easier :^).

    Pre-Installation

Harbor Master requires Python 3.

Before you can use Harbor Master, you must have a trusted host that is already running Docker. In addition, you must be using passwordless (key-based) SSH to connect to this host and have already transferred your key. Harbor Master does not, and (probably) will not, manage/accept passwords for SSH connections: they are insecure and add unnecessary complexity. Also, please read the SSH Configuration notes below to ensure your remote host has the proper configuration.
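
    For reference, a minimal sketch of the key setup (reusing the example user and host from the Usage section below; substitute your own):

    ssh-keygen -t ed25519              # generate a key pair if you do not have one
    ssh-copy-id dubey@192.168.1.111    # copy the public key to the remote Docker host
    ssh dubey@192.168.1.111 true       # verify that login now works without a password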

    Installation

Harbor Master is available on PyPI and can be installed via pip install harbormaster, or pip3 install harbormaster if you have multiple Python versions.

Alternatively, you may clone this repository, install the docker Python package as specified in requirements.txt, and then copy harbormaster.py into your path.

    Usage

    usage: harbormaster.py [-h] [-p P] [-l P] [-v] user host
    
    Automatically port forward the docker socket and container
    
    positional arguments:
      user        User to SSH as
      host        Host to SSH to
    
    optional arguments:
      -h, --help  show this help message and exit
      -p P        Local port for forwarded socket, default 2377
      -l P        Legacy TCP port to use instead of socket for Docker API
      -v          Verbose output
    

    For example:

    harbormaster.py dubey 192.168.1.111
    

This would connect to the machine at 192.168.1.111 as user dubey, establishing a Docker socket tunnel on port 2377. Once this command is running, you can let Harbor Master manage all the SSH tunnels necessary as containers go up and down.
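
    As a sketch of what using the tunnel might then look like (assuming the Docker API ends up reachable on local TCP port 2377; check Harbor Master's output for the exact endpoint):

    export DOCKER_HOST=tcp://localhost:2377   # assumption: the forwarded endpoint
    docker ps                                 # now talks to the remote daemon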

    Important Notes

    SSH Configuration

Most *nix distros come with sane defaults for the number of SSH connections allowed to a host, usually 10 concurrent connections. If you plan to have more than 10 ports forwarded, you must edit the sshd config located at /etc/ssh/sshd_config and change these parameters:

    MaxSessions 100
    MaxStartups 100
    

    In the above example, the host will accept 100 concurrent connections, allowing you to port forward 100 ports.
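
    After changing these values, the SSH daemon must be restarted for them to take effect; on most systemd-based distros that is:

    sudo systemctl restart sshd    # the unit is named `ssh` on Debian/Ubuntu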

In addition, it is highly recommended to disallow password SSH login and only use SSH key files.

    Version Notes

As of v0.1, Harbor Master assumes that you are using zsh and will modify your ~/.zshrc file by appending an export statement that lets any new shell session use the forwarded Docker socket. Harbor Master cleans up after itself on shutdown: it closes all open SSH tunnels and reverts its changes to the .zshrc file.

    Visit original content creator repository
    https://github.com/tanishq-dubey/harbormaster

  • server

    OpenLegends Game Server


    A lightweight, power-efficient server with in-memory session storage for playing multiplayer games using the local Asset API

    Important

    Project in development!

    Install

    Dependencies

    Debian
    sudo apt install curl
    Fedora
    sudo dnf install curl

    Rust

Use the rustup installer to set up the latest Rust environment:

    curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

    Server

    cargo install openlegends-server

    Launch

    Arguments

• --asset, -a required, the game Asset
• --bind, -b required, bind the server to host:port and listen for incoming connections on it
• --log, -l optional, log level (ednw by default):
  • e error
  • d debug
  • n notice
  • w warning

    Start

    openlegends-server --bind 127.0.0.1:4321 --asset test
    Test connection
    nc 127.0.0.1 4321
    Create systemd service
1. Install openlegends-server by copying the compiled binary into the system-wide binaries location:
    cp /home/openlegends/.cargo/bin/openlegends-server /usr/local/bin
2. Create a configuration file:
    # /etc/systemd/system/openlegends-server.service
    
    [Unit]
    After=network-online.target
    Wants=network-online.target
    
    [Service]
    Type=simple
    User=openlegends
    Group=openlegends
    ExecStart=/usr/local/bin/openlegends-server -b 127.0.0.1:4321 -a test
    
    [Install]
    WantedBy=multi-user.target
3. Run, in order:
• systemctl daemon-reload – reload the systemd configuration
• systemctl enable openlegends-server – enable the new service
• systemctl start openlegends-server – start the process
• systemctl status openlegends-server – check that the process launched
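
Once the service is running, you can follow its logs with standard systemd tooling:

    journalctl -u openlegends-server -f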


    Visit original content creator repository https://github.com/openlegends/server
  • AXACT.AndroidSample

    AXACT

    Axiros AXACT - Android Wrapper is a wrapper for AXACT

This project contains a sample application to demonstrate how to use TR-069 and TR-143.
AXACT is embedded in this project as an Android library.

This version opens a blank activity. AXACT is started as a service on application launch and is configured to run as a background service. To stop the service, use the developer options to see running tasks:

or call, from any Activity:

    stopService(new Intent(this, AxirosService.class));
    

    LIB proguard rules

To use a minifyEnabled build of your app, please add the following line to your proguard-rules.pro:

    -keep public interface com.axiros.axact.AXACTEvents {*;}
    

    Compiling on Android 6.0 (API level 23)

Beginning in Android 6.0 (API level 23), users grant permissions to apps while the app
is running, not when they install it. In order to request the required SDK permissions,
please be sure to call the verifyServicePermission method before binding the
service, and implement onRequestPermissionsResult. Its return value will tell
whether the bind can be made. A full sample can be found in MainActivity.

@Override
public void onRequestPermissionsResult(int requestCode, String[] permissions,
                                       int[] grantResults)
{
    /*
     * The option has already been chosen by the user. The bind can happen
     * here.
     */
}

@Override
public void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    if (!AxirosService.verifyServicePermission(MainActivity.this)) {
        /*
         * Permissions were already given by the user. The bind can happen
         * here.
         */
    }
}
    

    Tested intensively on multiple devices at AWS Device Farm

As it is compiled using the Android NDK, it can run on multiple Android versions.
    Visit original content creator repository
    https://github.com/axiros/AXACT.AndroidSample


  • laravel-exclusive-validation-rules

    Rules for ensuring that exactly 1 of n inputs is received


    This package was born out of a need to ensure that for a set of inputs exactly 1 was received. Using existing validation rules could ensure that at least 1 or no more than 1 would be received, but there wasn’t a succinct way of guaranteeing exactly one.

As such, two rules were implemented. The first, required_exclusive, ensures that exactly 1 of the inputs will be present. The second, optional_exclusive, allows for none of the inputs to be present, but if any are present there must be exactly 1.

    Installation

    You can install the package via composer:

    composer require thedavefulton/laravel-exclusive-validation-rules

The package is configured to use Laravel’s automatic discovery. However, you can manually register the service provider in the config/app.php file:

    'providers' => [
        // Other Service Providers
    
        Thedavefulton\ExclusiveValidationRulesServiceProvider::class,
    ],

    Usage

    These rules may be used like any standard validation rule.

$attributes = $request->validate([
    'input1' => 'required_exclusive:input2',
    'input2' => 'required_exclusive:input1',
]);

$attributes = $request->validate([
    'input1' => 'optional_exclusive:input2',
    'input2' => 'optional_exclusive:input1',
]);

They can also be extended to n inputs:

$attributes = $request->validate([
    'input1' => 'required_exclusive:input2,input3,input4',
    'input2' => 'required_exclusive:input1,input3,input4',
    'input3' => 'required_exclusive:input1,input2,input4',
    'input4' => 'required_exclusive:input1,input2,input3',
]);

    Testing

    composer test

    Changelog

Please see CHANGELOG for more information on what has changed recently.

    Contributing

    Please see CONTRIBUTING for details.

    Security

    If you discover any security related issues, please email thedave@thedavefulton.com instead of using the issue tracker.

    Credits

    License

    The MIT License (MIT). Please see License File for more information.

    Laravel Package Boilerplate

    This package was generated using the Laravel Package Boilerplate.

    Visit original content creator repository https://github.com/thedavefulton/laravel-exclusive-validation-rules
  • ptucc_compiler


    Introduction

This is a compiler that converts an imaginary language, ptuc (which bears quite
a few similarities to Pascal), into C; the full ptuc spec can be found here.
This is done using flex and bison.

    Features

    There are quite a lot of commonly sought features with decent implementation
    in this project, some of them are:

    • Modules (yes, that means includes)
    • Macro support (basic)
    • Nesting includes (up to a limit — avoids inf. circular includes)
    • Custom multiple flex input buffer management
    • Accurate line tracking across includes
    • Does not fail-fast (that means we don’t die @ first error).
    • Customize compiler using command line arguments

    That’s what comes to my mind right now, if you dig into the code I am sure
    you’ll find more.

    Requirements

    I assume that you will run this in a modern (unix-like) platform
    — this includes Linux and Mac OS, sorry Windows users. Here is also
    a more detailed dependency list:

    • recent Linux or Mac OS
    • gcc >= 4.7
    • GNU flex >= 2.6.0
    • GNU bison >= 3.0.4
    • GNU Make >= 4.1
    • valgrind >= 3.11 (optional(?))
    • git >= 2.7.4 (optional(?))

Finally, if you want to follow the tutorial on how this was made,
you are going to need a good text editor like vim,
gedit, or Sublime; I have no real preference there, just use
what you are most comfortable with. Should you want to use an IDE,
I think you will find it really hard to set up, let alone have
proper syntax highlighting. I personally use vim, but I have
tested gedit as well, so both of these editors will work fine
as they already have proper syntax highlighting implemented.
Sublime does not currently have good support for flex (.l)
and bison (.y) files; it’s also a paid solution.

    Compiling ptucc

After you ensure you are on a supported platform, have
installed the required dependencies, and have successfully cloned this
repo, the next step is to open a terminal inside the folder you just
created and type:

    $ touch .depend; make all
    

The default mode compiles the project in Debug mode without
optimizations; this can be changed by setting the DEBUG flag to 0 at
compile time, as sketched below.
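
A hedged example, assuming the Makefile picks DEBUG up as a command-line variable (check the Makefile for the exact mechanism):

$ make clean; make all DEBUG=0
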

    Compiling a .ptuc file

The next step is to compile a .ptuc file; if you want to create your
own files, you will probably have to read the ptuc language definition,
which is located here. Alternatively, if you want to just execute
the test or the example files, you have two options:

    Run the tests

This is a fancy wrapper that just compiles and runs the sample001.ptuc file,
hence all you have to do is type in your console:

    $ make test
    

    Run the samples

The other way of running the provided sample files is even easier;
you just have to type make and the filename, like so:

    $ make filename
    

    So if you want to make sample005.ptuc you would do:

    $ make sample005
    

The output might be a little different depending on your console spam
settings, but both ways will compile and execute the files. You will
probably wonder why gcc generates no warnings when compiling the
generated .c files (e.g. sample005.c). This, again, is to reduce your
console spam; these warnings are expected, as they stem from the way
the ptuc compiler generates C. If you want to see them regardless, you
will have to compile with DEBUG_GEN_FILES=1, as shown below; although,
you can’t really do anything about them without fiddling with the code
generation.
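
For example, again assuming the flag is passed as a make command-line variable:

$ make sample005 DEBUG_GEN_FILES=1
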

    Compile the file manually

    Should you want to compile the .ptuc file manually you can do so by
    following the instructions below.

    First you will have to compile the .ptuc file to its C equivalent
    representation:

    $ ./ptucc < infile.ptuc > outfile.c
    

    Then compile the .c file itself:

    $ make outfile
    

    Then execute it (if you wish):

    $ ./outfile
    

    ptucc arguments

    The console arguments supported by ptucc are the following:

• -v: produces more verbose output during parsing (can be used with any option).
• -i infile.ptuc: specifies the input file, instead of taking the file piped from stdin.
• -o outfile.c: specifies the output file.
• -d depth: specifies the maximum number of flex input buffers that we can have.
• -m macro_limit: specifies the number of hashtable bins (the maximum number of macros is 4 times this value).
• -h: prints usage patterns.

So, for example, ./ptucc -h produces this output:

    $ ./ptucc -h
    Example Usage:
      ./ptucc -v verbose output (can be used in any combination)
      ./ptucc -i [infile]
      ./ptucc -i [infile] -o [outfile]
      ./ptucc -i [infile] -o [outfile] -d [depth]
      ./ptucc -i [infile] -o [outfile] -d [depth] -m [macro_limit]
      ./ptucc -h (prints this)
      ./ptucc infile.ptuc
      ./ptucc < infile.ptuc > outfile.c
    

    Epilogue

If you are here just to clone and submit a copy-pasta (you probably know who
you are and why)… I would discourage you from doing so and point you to
the guide on how you can make something like this on your own (intro,
starting stub, flex part, bison part). Additionally, this
version has some salts which add more functionality, so FYI, that’s quite
the giveaway. Hopefully this might encourage you to learn something new (and
useful?) — I sure hope that’s the case, as this code polishing and write-up
took a good three weeks++ of my spare time :).

    Visit original content creator repository
    https://github.com/andylamp/ptucc_compiler

  • thisisnotadinosaur

    This Is Not A Dinosaur

    Jurassic or Just-a-pic? Let our Dino-Detective decide!

    What’s this?!

    It’s more than just an app—it’s a glimpse into the prehistoric world, a journey through pixels and patterns to uncover the ancient giants that once roamed the Earth. In simple words, this app accepts an image from the user and detects if any Dinosaurs are present in that image.

    But why!

For fun…and a bit more. I am a Machine Learning Engineer with 20 years of work experience. However, of late my search for a job has not proved to be quite fruitful. Doubts started creeping in, I started to question my ability as a coder, and I shied away from publishing any of the apps that I had created lately. And then one fine night while doom-scrolling through Reddit I found something interesting. This. It inspired me to start afresh, to start with something simple, something fun. And THIS IS NOT A DINOSAUR was born.

    Noice, isn’t it? What can you do?

Simple. Let me know what you think about this; whether you like it, hate it, had fun with it, broke it, anything. As Thor Odinson said, it would make me feel that “I’m still worthy!” You are also welcome to contribute if you want. And if you are feeling generous, consider donating so that I can continue to create more apps like this.

    Buy Me a Coffee at ko-fi.com

    Tech Stack

    • Python
    • Streamlit
    • Gemini Vision Pro

    Roadmap

• Add LangChain support
• Add support for WebP format
• Stop hallucination
• Add authentication

    Visit original content creator repository
    https://github.com/rajtilakjee/thisisnotadinosaur


  • survival-analysis

    survival-analysis

    data description

HID         ID of the account
Active      whether the account is active (=1) or not (=0)
Rewards     whether the customer has a reward card (=1) or not (=0)
Limit       credit limit of the customer
numcard     number of cards that the customer has from this bank

Mode of acquisition
DM          whether the customer was acquired through direct mail (1=Yes, 0=No)
DS          whether the customer was acquired through direct selling (1=Yes, 0=No)
TS          whether the customer was acquired through telephone selling (1=Yes, 0=No)
NET         whether the customer was acquired through the internet (1=Yes, 0=No)

Type of card
Gold        whether the customer has a GOLD card (1=Yes, 0=No)
Platinum    whether the customer has a PLATINUM card (1=Yes, 0=No)
Quantum     whether the customer has a QUANTUM card (1=Yes, 0=No)
Standard    whether the customer has a STANDARD card (1=Yes, 0=No)

Profit      profit generated by the customer over a 3-year period
Totaltrans  total transaction amount (money spent) by the customer over a 3-year period
Totfc       total finance charges paid by the customer over a 3-year period
Age         age in years
Dur         duration: number of months a customer has stayed with the firm

Types of affinity cards
SectorA     no affinity – card is not associated with affinity to an organization
SectorB     affinity card affiliated with a professional organization (e.g. Am. Medical Assoc.); if a customer has an affinity card of this type, value = 1, else 0
SectorC     affinity card affiliated with sports
SectorD     affinity card affiliated with a financial institution
SectorE     affinity card affiliated with a university
SectorF     affinity card affiliated with commercial (e.g. Macy’s card)

    Visit original content creator repository
    https://github.com/ravichrn/survival-analysis

  • nodejs-clusters-websocket-server

SUMMARY

This server will scale as much as you need. It detects the number of CPUs in your cluster and makes the best out of what you have. It sends messages between the processes over the Unix IPC protocol so that every process knows the global state of the application. When it receives a message, it resends it to all processes subscribed to the channel that the sending connection is on. When it receives a message from another process, it shares it with all the websocket connections of that channel.

    HOW IT WORKS

Once a connection is made, the cluster sends a message to the master process, which then updates all clusters on the server with the new global state of the application. Each cluster has its own copy of the application’s global state object and updates it when a message from the master is received, as shown below.

1) A new connection is made with cluster 3.

2) Cluster 3 sends this information to the master.

3) The master resends it to all clusters.

    CONNECT TO A CHANNEL

To connect to a channel, you must connect to the websocket address and append the channel name after the slash.

Example: ws://ipFromServer:port/yourChannelName
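
For a quick test from a terminal, a generic websocket client such as wscat (from npm) can be used; the address and channel below are placeholders:

npm install -g wscat
wscat -c ws://127.0.0.1:8080/myChannel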

    KILL A CHANNEL

To kill a channel, you must send an HTTP DELETE request to http://ipFromYourServer/delete/:channel.
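
For example, with curl (address and channel are placeholders):

curl -X DELETE http://127.0.0.1:8080/delete/myChannel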

    KILL CONNECTION

To kill a connection, you must send an HTTP DELETE request to http://ipFromYourServer/kill-connection/:channel/:connectionId/:workerID. The connection id and worker id were sent through the websocket when the connection was established. When the connection data is deleted from the server, the server sends a message over the connection saying that its status is disconnected; use this to close the connection on your client side.
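
Again with curl, substituting the connection id and worker id you received over the websocket (all values below are placeholders):

curl -X DELETE http://127.0.0.1:8080/kill-connection/myChannel/42/3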

    MESSAGES

All messages received and sent by the server are JSON by default, so they are easier to handle on the server side. When a cluster receives a message from a connection, it checks for all other clusters that have connections on the same channel and resends the message directly to those clusters. When the message is received, those clusters post it to all of their websocket connections on that same channel, as the image below shows.

1) A message is received from a websocket connection at cluster 4.

2) Cluster 4 checks its global application object for all clusters with the same channel.

3) Cluster 4 sends the message to all clusters found with that channel.

4) After receiving the message, each cluster resends it through its websocket connections with that channel.
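
As a sketch of a round-trip using wscat again (the "type" value "chat" and its payload are hypothetical; they depend on the handlers you define, see CUSTOMISE below):

wscat -c ws://127.0.0.1:8080/myChannel
> {"type":"chat","text":"hello channel"}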

    CUSTOMISE

Here is where all the application’s public messages are received and handled. If you want to add a functionality, just add it to the “commands” object and then include the command key in the message.

Here all the one-to-many message functionalities between the internal processes are declared. If you want to add one, just put it in the broadCastCommands object below, like the others.

Here is where all the functions our server provides are declared. You provide a key named “type” in the message you send so that the object can find the corresponding function. That way we don’t have to create an additional conditional for every new function we add.

WARNING!!!

Don’t use “command”, “add”, “close” or “kill channel” as a “type” key! These are internal keys already used by this application.

    LAST CONSIDERATIONS

There is a console log that shows when connections are made or lost. It can directly affect the server’s performance as the number of concurrent connections grows, so you can delete or comment it out for better results if needed. You can also raise Node’s maximum RAM usage to improve performance.

    Visit original content creator repository https://github.com/felipegenef/nodejs-clusters-websocket-server