Commit Graph

60 Commits

Author SHA1 Message Date
Hamish Coleman
ae502d9181
JSON Reply Management API - feature parity with old management interfaces (#861)
* Ensure that recent code additions pass the linter

* Include some of the more obviously correct lint fixes to edge_utils.c

* Refactor edge JSON api into its own source file

* Use shorter names for static management functions

* Implement a JSON RPC way of managing the verbosity

* Tidy up help display in n2nctl script

* Make note of issue with implementing the stop command

* Implement a JSON RPC call to fetch current community

* Make the n2nhttpd time value more self-contained

* Make n2nhttpd order more closely match the existing management stats output

* Wire up status page to the verbosity setting

* Add JSON versions of the remainder of the edge management stats

* Add new file to cmake

* Properly define management handler

* Only update the last updated timestamp after a successful data fetch

* Function and types definition cleanup

* Force correct type for python scripts mgmt port

* Implement initial JSON API for supernode

* Fix whitespace error

* Use helper function for rendering peers ip4 address

* Proxy the auth requirement back out to the http client, allowing normal http auth to be used

* Ensure that we do not leak the federation community

* Use the same rpc method name and output for both edge and supernode for peers/edges

* Allow n2nctl to show raw data returned without resorting to tricks

* Make n2nctl pretty printer understandable with an empty table

* Use the full name for supernodes RPC call

* Use same RPC method name (but some missing fields) for getting communities from both edge and supernode

* Add *_sup_broadcast stats to edge packet stats output

* Refactor the stats into a packetstats method for supernode RPC

* Even if I am not going to prettyprint the timestamps, at least make all the timestamps on the page the same unit

* Simplify the RPC handlers by flagging some as writable and checking that in the multiplexer

* Remove invalid edges data

* Avoid crash on bad data to verbose RPC

* Avoid showing bad or inconsistent protocol data in communities RPC

* Minor clarification on when --write is handled

* Make linter happy

* Fix changed method name in n2nhttpd

* Move mainloop stop flag into the n2n_edge_t structure, allowing access from management commands

* Implement edge RPC stop command

* Move mainloop stop flag into the n2n_sn_t structure, allowing access from management commands

* Implement supernode RPC stop command

* Allow multiple pages to be served from mini httpd

* Extract common script functions into a separate URL

* Handle an edge case in the python rpc class

With a proper tag-based demultiplexer, this case should be a nop,
but we are single-threaded and rely on the packet ordering in this
library.

* Add n2nhttpd support to query supernode using urls prefixed with /supernode/

* Handle missing values in javascript table print

* Add another less filtering javascript key/value renderer

* Add a supernode.html page to the n2nhttpd

* Address lint issue

* Mention the second html page on the Scripts doc

* Remove purgable column from supernode edges list - it looks like it is rarely going to be set

* Add a simple one-line example command at the top of the API documentation (a query sketch follows this commit entry)

* Acknowledge that this is not the most efficient protocol, but point out that it was not supposed to be

* Make it clear that the n2nctl script works for both edge and supernode

* Fight with inconsistent GitHub runner results

* Turn off the /right/ coverage generator
2021-10-23 11:05:05 +05:45
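
To make the API added in #861 above more concrete, here is a minimal sketch of a read query against the JSON management interface, in the spirit of the n2nctl script. It assumes the default edge management UDP port (5644), a tagged request of the form `r <tag> <method>`, and reply datagrams that are JSON objects carrying `_tag` and `_type` ("begin"/"row"/"end"/"error") fields; all of these details should be checked against the API documentation referenced in the commit.

```python
#!/usr/bin/env python3
"""Minimal sketch of a JSON management API read query (assumptions noted above)."""
import json
import socket


def rpc_read(method, host="127.0.0.1", port=5644, tag="1", timeout=2.0):
    """Send "r <tag> <method>" and collect tagged rows until "end" or "error"."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    sock.sendto(f"r {tag} {method}".encode(), (host, port))

    rows = []
    while True:
        try:
            data, _ = sock.recvfrom(8192)
        except socket.timeout:
            return rows                      # daemon not answering; give up
        msg = json.loads(data.decode())
        if msg.get("_tag") != tag:
            continue                         # reply belongs to another request
        if msg.get("_type") == "row":
            rows.append(msg)
        elif msg.get("_type") in ("end", "error"):
            return rows


if __name__ == "__main__":
    # e.g. the "verbose", "edges" or "communities" methods mentioned above
    for row in rpc_read("verbose"):
        print(row)
```

Under the same assumptions, the one-line equivalent mentioned in the commit would look something like `echo 'r 1 verbose' | nc -u -w1 127.0.0.1 5644`.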
Hamish Coleman
bb3de5698c
added JSON interfaces to edge management port and scripts to further process output (#854)
* Add management commands to show data in JSON format

* Add a script to query the JSON management interface

* Surprisingly, the GitHub runner does not have flake8 installed

* Add n2nctl debugging output to show the raw data received from the JSON

* Ensure well-known tag wrap-around semantics (see the tag-matching sketch after this commit entry)

* Try to ensure we check every edge case in the protocol handling - only valid packets are allowed

* Add a very simple http to management port gateway

* Fix the lint issue
2021-10-16 00:11:39 +05:45
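
A minimal sketch of the tag handling hinted at in #854 above: request tags are small values that wrap around within a fixed range, and reply packets whose tag does not match the outstanding request are simply discarded. The 0-9 range, the `_tag`/`_type` field names, and the class/function names are illustrative assumptions, not code from the repository.

```python
"""Sketch of tag wrap-around and reply filtering (illustrative, see note above)."""


class TagAllocator:
    """Hands out request tags that wrap around inside a small fixed range."""

    def __init__(self, lowest=0, highest=9):
        self.lowest = lowest
        self.highest = highest
        self.next_tag = lowest

    def allocate(self):
        tag = self.next_tag
        # wrap back to the lowest value instead of growing without bound
        self.next_tag = self.lowest if tag == self.highest else tag + 1
        return str(tag)


def accept_reply(reply, outstanding_tag):
    """Accept only well-formed replies addressed to the outstanding request."""
    return isinstance(reply, dict) and reply.get("_tag") == outstanding_tag


if __name__ == "__main__":
    tags = TagAllocator()
    print(" ".join(tags.allocate() for _ in range(12)))        # wraps after 9
    print(accept_reply({"_tag": "3", "_type": "row"}, "3"))    # True
    print(accept_reply({"_tag": "7", "_type": "row"}, "3"))    # False
```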
Hamish Coleman
1670b14d69
reduced the number of artifacts to a reasonable number (#853)
* Consolidate all binaries into one artifact bucket

* Remove unused variables from cmake matrix

* Consolidate dpkg and rpm packages into one bucket each

* Consolidate all the coverage reports into one bucket

* Consolidate all the test outputs into one bucket

* Avoid the artifact prefix removal using a simple hack, but upload the expected test results as a consequence
2021-10-15 20:19:52 +05:45
Hamish Coleman
c3c72e2656
test on all available runner environments and add autogenerated crossbuilt dpkg packages (#852)
* Make test workflow smoke test use the same internal name as descriptive name

* Refactor workflow to be test_os then build for that OS

* Run tests on all available github runner environments

* Ensure that dpkg builds will fail if the compile fails

* Allow explicitly overriding the debian package architecture

* Pass the detected architecture into the dpkg build process (see the architecture-mapping sketch after this commit entry)

* Use the possibly overridden MACHINE variable to calculate the short machine name

* Remove unused variable

* Remove unused AC_SUBST

* Allow EXTN to be overridden instead of MACHINE

* Add crossbuilding for dpkg builds

* Ubuntu does not provide a crossbuild toolchain for mips

* Use the correct value for EXTN
2021-10-14 15:54:19 +05:45
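
As referenced in the list above, a sketch of the architecture handling this commit describes: the detected machine name can be explicitly overridden and is then mapped to the string that dpkg expects. The `MACHINE` environment variable and the mapping table below are assumptions for illustration; the real build passes these through make/configure variables.

```python
"""Sketch of mapping an (optionally overridden) machine name to a dpkg arch."""
import os
import platform

# uname-style machine name -> Debian package architecture (illustrative subset)
DEB_ARCH = {
    "x86_64": "amd64",
    "aarch64": "arm64",
    "armv7l": "armhf",
    "i686": "i386",
}


def debian_architecture():
    # explicit override first, as a crossbuild would do, else detect the host
    machine = os.environ.get("MACHINE") or platform.machine()
    return DEB_ARCH.get(machine, machine)


if __name__ == "__main__":
    print(debian_architecture())           # e.g. "amd64" on an x86_64 host
    os.environ["MACHINE"] = "aarch64"      # explicit override for a crossbuild
    print(debian_architecture())           # "arm64"
```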
Hamish Coleman
f5b730baed
added some cross-compiled binary outputs to the autobuild (#850)
* Add an example cross compile build

* Harmonise the naming to reflect full architecture and if it is a real package or not

* Add some more example cross compile targets

* Only one RPM package is created, so use the singular word

* Don't use the build triplet for OS packages; use the OS's preferred arch name

* Add some cross compiling documentation to the Building.md
2021-10-14 02:30:42 +05:45
Hamish Coleman
7d4ff08200
added automated binary artifacts (#849)
* Allow an autobuilder with no access to private key material to create testable packages

* Initial dpkg build - will need helpers installed to work

* Start adding required dpkg helpers

* Tweak package artifact names

* Add a windows 'package' builder

* Ensure prefix path handling deals with current directory change when descending to tools dir

* The tools makefile currently only needs the SBINDIR path to install properly

* Add a macos 'package' builder

* Remove unused configure variables

* Without commit history, some of the automatic version numbering will fail

* Add an rpm builder

* Need to set the env var for the rpm build before we change our working dir

* Allow gpg signing to fail for generating test rpm packages

* Unfortunately the rpm spec file hardcodes some path assumptions, so we need to use hacks to work around them

* Return to the top dir before moving things around

* A small change to make actions re-run the pipeline

* Name this workflow file with a nicer looking name
2021-10-11 18:44:28 +05:45
Hamish Coleman
4438f1aa2a
added mingw test platform (#829)
* Provide a minimal reimplementation of our autoconf, to try windows builds

* Try building with windows

* Fix thinko in spelling

* Ensure shell script runs inside a shell

* Add a hack to aid include discovery

* Just keep adding tech debt...

* Assume that we will have slashes in some of the replacement strings and avoid that char with sed

* Restore one slash

* Hack around the tools makefile interdependency bug

* A correct cflags include hack for each compile dir

* Ensure we link against winsock (note, even though this says 32bit, it should link the 64bit library ... I think)

* Bad link ordering if we don't use LDLIBS

* Remove unused make variable

* Remove makefile duplication using inheritance (this does mean you can no longer cd tools; make, but must do make tools)

* Add missing library for win32

* Show OS variable

* Make hack autoconf more robust for tests on non gitlab runners

* Remove no longer used substitutions from hack autoconf

* Add missing include path to tools under win32

* Build the win32 subdir when the compiler is Msys

* The different subdirs have different dependencies

* Ensure we can find the include files

* Fix library link ordering

* Ensure the tools dir can find the special win32 lib

* Deal with the differing basic type sizes on both linux/64bit and windows/64bit

* Document the steps to mimic the github windows/mingw build locally - to allow for simpler debugging

* Ensure branch name in instructions matches my test branch name

* Clarify the shell needed to build with mingw

* Since the makefile depends on knowing the OS, raise a fatal error if we cannot determine this

* Handling different compile environments is hard.

- Linux: sane and reasonable results for both uname -s (=Linux) and
  uname -o (=GNU/Linux)
- Windows/Mingw: insane results for uname -s
  (=MSYS_NT-$MAJOR.$MINOR-$BUILDNR) but sane results for uname -o (Msys)
- macOS: sane results for uname -s (=Darwin) but does not support
  uname -o at all

* Revamp the way that Mingw is detected (see the OS-detection sketch after this commit entry)

* Avoid attempting to generate gcovr report when running under windows

* Whoops, isolate the right step

* Fix spelling mistake

* win32/Makefile: Remove unused setting and add comment

* Ensure that all win32 includes use the same expected path

* Allow simpler cross compilation by letting configure pass the CC and AR environment through

* Avoid multiple '_CRT_SECURE_NO_WARNINGS redefined' warnings

* Convert to a consolidated CONFIG_TARGET variable to select any different compile options

* Use the more generic printf defines to avoid warnings on mingw

* Update mingw build docs

* Improve the English in the docs so they read better

* Address a number of mingw compiler warnings

* Fix Visual C compile

* Be sure to document some of the hacky nature of the mingw build
2021-10-06 00:52:15 +05:45
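
A sketch of the OS detection described above, written as a standalone script rather than Makefile logic: trust `uname -o` where it gives a sane answer (it is the only usable one under Msys), fall back to `uname -s`, and fail loudly when neither identifies the platform, as the commit requires. The returned target names are illustrative.

```python
"""Sketch of the uname-based OS detection discussed above (names illustrative)."""
import subprocess


def uname(flag):
    try:
        proc = subprocess.run(["uname", flag], capture_output=True, text=True)
        return proc.stdout.strip()
    except OSError:
        return ""                    # uname not available at all


def detect_target():
    kernel = uname("-s")             # Linux, Darwin, or MSYS_NT-<ver>-<build>
    osname = uname("-o")             # GNU/Linux, Msys, or empty on macOS
    if osname == "Msys" or kernel.startswith("MSYS"):
        return "mingw"
    if kernel == "Darwin":
        return "darwin"
    if kernel == "Linux":
        return "linux"
    # mirror the commit's "raise a fatal error if we cannot determine this"
    raise RuntimeError(f"cannot determine OS from uname: {kernel!r}/{osname!r}")


if __name__ == "__main__":
    print(detect_target())
```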
Hamish Coleman
dec1771d5f
added test platform MacOS (#828)
* Just for a laugh, let's naively throw the same build at non-Linux OSes

* Only run apt commands on linux (yes, this is not actually right, but it is close enough for github actions at the moment)

* Start installing required macOS packages

* Only run apt commands on linux #2

* Ensure that we use a Bourne shell, even on Windows

* Until it is clear how to install autotools on windows in github runners, avoid fighting that bear

* Only try to run gcovr on ubuntu-latest

* Install the right macos dep

* Install gcovr on macos and upload all coverage report artifacts

* Upload a generated tests output artifact, even if the tests failed

* Prepend a quick smoke test to the full matrix and coverage builds

* Use short names for jobs
2021-09-29 16:51:02 +05:45
Hamish Coleman
b735ad6b9e
added test framework and code coverage reporting (#797)
* Add a simple test framework

* Add a code coverage report example oneliner

* Move the coverage report into a separate directory

* Add a github action to run tests and publish a branch with the coverage report

* Fix: Missing job separator

* Fix: remember to actually run configure

* Fix: Gotta autogen before I configure

* Don't try to upload the coverage report unless this is a push

* Clearly show the git ref tested in the coverage report

* Add a test for the various transforms

* Add tests for the elliptic curve and pearson hash

* Ensure we ignore new generated output

* Remove unneeded boilerplate from the compression tests

* Add an example of a test of the encoded wire packets

* Ensure that correctly testable data is output even when zstd is not compiled

* Factor test runner out into its own script and attempt to add it to the cmake file (see the runner sketch after this commit entry)

* Tell cmake about a new object file

* Stop trying to make Cmake work...

* Stop trying to make cmake work, round 2

* In the middle of a thousand lines of cmake output was one important one - windows could not find assert() - try again

* Try again to plumb the tests into cmake

* Add missing library to our superset install line

* Fix build error when libcap-dev is installed

* Switch to using artifact uploads instead of pages to store/show the coverage report

* Fix artifact upload yaml

* Upload coverage report to codecov

* Fix codecov - clearly it doesn't do a recursive search for coverage files

* Fix codecov - my hopeful use of a list of directories didn't work

* Fix codecov - unfortunately, it doesn't just consume the coverage data and needs us to generate the gcov output

* Fix codecov - nope, it still doesn't recursively search

* Fix codecov - it really helps if I run the gcov data generator

* Add a simple matrix build

* Fix older ubuntu versions of gcovr that do not support the '--html-title' option

* Ensure we use gcovr options that are identical on older Ubuntu

* Improve coverage generation and required build packages
2021-09-27 15:26:06 +05:45
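
A sketch of the kind of expected-output test runner this commit describes: each test binary is run and its stdout compared against a stored expected file. The binary names and the `<name>.expected` layout are assumptions for illustration, not the repository's actual paths.

```python
"""Sketch of an expected-output test runner (paths and names are illustrative)."""
import subprocess
import sys
from pathlib import Path

TESTS = ["tests/test_transforms", "tests/test_pearson", "tests/test_wire"]


def run_one(binary):
    """Run one test binary and diff its stdout against <binary>.expected."""
    expected = Path(f"{binary}.expected")
    result = subprocess.run([binary], capture_output=True, text=True)
    ok = (result.returncode == 0
          and expected.exists()
          and result.stdout == expected.read_text())
    print(f"{'PASS' if ok else 'FAIL'} {binary}")
    return ok


if __name__ == "__main__":
    results = [run_one(test) for test in TESTS]    # run all, do not short-circuit
    sys.exit(0 if all(results) else 1)
```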
Sven Roederer
90c2364b6d
CI: build on Linux, Windows and MacOS via GitHub Actions (#679)
* Create a GitHub Actions workflow to build via cmake on Ubuntu

  made just with the GitHub Actions workflow assistant

* GHA/cmake: build via matrix for different OS

* build for Linux-x86, Linux-arm and MacOS
* code taken from https://github.community/t/create-matrix-with-multiple-os-and-env-for-each-one/16895/6

* GHA: add a build on Windows
2021-04-05 19:29:08 +02:00