<response>
<title>REST-API for Notes Management Tool</title>
<info>You need an appropriate client to access this API</info>
<github>https://github.com/tbreuss/notes-client</github>
<url>https://notes.tebe.ch</url>
</response>
Build
composer build
Builds a zip archive for production. Requires globally installed git and composer, and an existing config/prod.env.php.
npm start
Runs the app in the development mode.
Open http://localhost:3000 to view it in your browser.
The page will reload when you make changes.
You may also see any lint errors in the console.
npm test
Launches the test runner in the interactive watch mode.
See the section about running tests for more information.
npm run build
Builds the app for production to the build folder.
It correctly bundles React in production mode and optimizes the build for the best performance.
The build is minified and the filenames include the hashes.
Your app is ready to be deployed!
See the section about deployment for more information.
npm run eject
Note: this is a one-way operation. Once you eject, you can’t go back!
If you aren’t satisfied with the build tool and configuration choices, you can eject at any time. This command will remove the single build dependency from your project.
Instead, it will copy all the configuration files and the transitive dependencies (webpack, Babel, ESLint, etc) right into your project so you have full control over them. All of the commands except eject will still work, but they will point to the copied scripts so you can tweak them. At this point you’re on your own.
You don’t have to ever use eject. The curated feature set is suitable for small and middle deployments, and you shouldn’t feel obligated to use this feature. However we understand that this tool wouldn’t be useful if you couldn’t customize it when you are ready for it.
OpenDNAS is an open-source implementation of the production DNAS servers hosted by SCEI for authenticating Sony PlayStation clients to play multiplayer games.
On April 4, 2016, SCEI discontinued the official DNAS servers, taking hundreds of multiplayer game titles down with them.
OpenDNAS aims to be a solution to this, providing successful authentication for emulators and genuine PlayStations.
Requirements
nginx (DNAS does not work with HTTP/1.1 …)
OpenSSL 1.0.2i (or older, as long as it supports SSLv2).
Please do not run this application directly on a production system. It requires OpenSSL 1.0.2i compiled with SSLv2 support, which is no longer secure.
An NN-Based Cost Model for VPU Devices. For additional information about model setup and training, please refer to this paper.
If you find this work useful, please cite the following paper:
@article{DBLP:journals/corr/abs-2205-04586,
doi = {10.48550/ARXIV.2205.04586},
url = {https://arxiv.org/abs/2205.04586},
author = {Hunter, Ian Frederick Vigogne Goodbody and Palla, Alessandro and Nagy, Sebastian Eusebiu and Richmond, Richard and McAdoo, Kyle},
title = {Towards Optimal VPU Compiler Cost Modeling by using Neural Networks to Infer Hardware Performances},
publisher = {arXiv},
year = {2022},
copyright = {arXiv.org perpetual, non-exclusive license}
}
Setup
GCC version should be > 9. You can check your GCC version by running gcc --version and g++ --version
If you do not set the CC and CXX environment variables, the compilers found by which gcc and which g++ are used by default.
Compile the library by typing cmake -H. -Bbuild && cmake --build build
Install the oneAPI Base Toolkit (instructions). oneAPI is massive, so feel free to install only the Math Kernel Library (MKL).
If you have trouble with a proxy, export no_proxy=127.0.0.1 to bypass any no_proxy environment setting for *.intel.com URLs.
To enable MKL you need to source /opt/intel/oneapi/setvars.sh to set the appropriate environment variables. Look here for how to get started with VSC.
Select BLAS library
You can select which BLAS library to use (assuming you have MKL installed) and the threading mode using the following CMake variables (a combined example follows the list):
-DCBLAS_LIB=<value> (options: mkl for oneMKL and openblas for OpenBLAS)
-DMKL_THREADING=<value> (options: tbb for oneAPI Threading Building Blocks and sequential for no threading)
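For example, a configure and build step selecting oneMKL with TBB threading might look like this (it reuses the build directory from the compile command above):
cmake -H. -Bbuild -DCBLAS_LIB=mkl -DMKL_THREADING=tbb && cmake --build build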
Using the cost model: C++
Using the VPUNN cost model in a CMake project is quite simple. An example of a CMakeLists.txt file is shown below.
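A minimal sketch of such a CMakeLists.txt is given here; the subdirectory layout, the target names and main.cpp are assumptions, so adapt them to how VPUNN is actually vendored and exported in your project:
cmake_minimum_required(VERSION 3.14)
project(vpunn_example CXX)

# Assumption: the VPUNN sources are available (e.g. as a git submodule) in ./vpunn
add_subdirectory(vpunn)

# Assumption: main.cpp is your own source file that includes the VPUNN headers
add_executable(workload_example main.cpp)

# Assumption: "vpunn" is the library target exported by the VPUNN build
target_link_libraries(workload_example PRIVATE vpunn)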
The example folder contains a few examples of how to build and use the cost model in a C++ project. The following list of supported examples is a work in progress:
workload_mode_selection:
Selecting the optimal MPE mode for a VPU_2_0 workload
Choosing the optimal workload split strategy among multiple ones
Using the cost model: Python
You can install the library by typing pip install .
Do this in a Python virtual environment.
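A typical sequence on Linux/macOS, run from the repository root, would be:
python -m venv .venv
source .venv/bin/activate
pip install .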
Cost models
Run the vpu_cost_model script to evaluate workloads from the command line
usage: vpu_cost_model [-h] --model MODEL [-t {cycles,power,utilization}] {VPU_2_7,VPU_4_0} ...
VPU cost model
positional arguments:
{VPU_2_7,VPU_4_0}
options:
-h, --help show this help message and exit
--model MODEL, -m MODEL
Model path
There are two possible VPU versions; each version has a DPU and a DMA model. It is possible to bring up the help menu in the following ways:
optional arguments:
-h, --help show this help message and exit
--name NAME Model name
--output OUTPUT Output model (default model.vpunn)
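For example, either of the following shows the help, at the top level or for one specific VPU version (the model file name is only a placeholder):
vpu_cost_model --help
vpu_cost_model --model model.vpunn VPU_2_7 --help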
VPUNN to JSON
Converts a VPUNN model into JSON for debugging purposes.
usage: vpunn_to_json [-h] file
positional arguments:
file graphFile to deserialize to json OR an already deserialized json
optional arguments:
-h, --help show this help message and exit
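For example (the model file name is only a placeholder):
vpunn_to_json model.vpunn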
Javascript (WASM) support
To compile the Web Assembly (WASM) version of the library, follow the steps below:
Configure Emscripten with cmake by typing emmake cmake ..
Build the JavaScript interface: emmake make vpunn_js -j
The build command produces an npm package that can later be installed in any JS project by running npm install <path to build folder>/dist/vpunn-*.tgz
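Put together, a typical out-of-source WASM build might look like this (the build_wasm directory name is the one reused in the WASM test section below):
mkdir build_wasm && cd build_wasm
emmake cmake ..
emmake make vpunn_js -j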
Developer guide
Git hooks
All developers should install the git hooks that are tracked in the .githooks directory. We use the pre-commit framework for hook management. The recommended way of installing it is using pip:
pip install pre-commit
The hooks can then be installed into your local clone using:
pre-commit install --allow-missing-config
--allow-missing-config is an optional argument that allows users to have the hooks installed and functional even when using an older branch that does not have them tracked. A warning will be displayed in such cases when the hooks are run.
If you want to manually run all pre-commit hooks on a repository, run pre-commit run --all-files. To run individual hooks use pre-commit run <hook_id>.
Uninstalling the hooks can be done using
pre-commit uninstall
Testing the library
Cost model test (C++)
Tests use the Google Test suite for automating tests.
To run the test suite: ctest --test-dir build/tests/cpp/
Example: running only the cost model integration test: ./tests/cpp/test_cost_model
E2E Python test
pytest tests/python/test_e2e.py -v
WASM test
Assuming you built the VPUNN WASM library in build_wasm, install VPUNN locally with all its dependencies.
To generate a code coverage report you need to enable it in CMake:
cmake -DCMAKE_BUILD_TYPE=Coverage .. && make coverage -j
These commands generate a coverage folder inside the build folder with all the coverage information.
Dependencies:
Gcov-9 and Gcovr tools are needed in order to generate the report
Only GCC is supported (no WASM/Visual Studio)
Notice about configurations not covered by training, or with greater errors.
NPU2.0
Not Available
NPU2.7
ISI=CLUSTERING + OWT=2: replaced at runtime with SOK. Runtime should be the same; no input halo is used.
Elementwise + ISI=SOK: replaced at runtime with clustering + OWT=1. The time is slightly undervalued, but it is the best approximation available.
CM_CONV (compress convolution) + InputChannels=1
SOH (HALO) split with kernel=1 has probably not been part of training; it does not make sense to have kernel=1 and an input halo, and NN predictions are problematic. Replaced at runtime with Clustering.
SOH Halo split, at least when H and K are small, produces much bigger results than SOH Overlapped. This is not realistic and might be an NN limitation. See VPULayerCostModelTest.Unet_perf_SOH_SOK_after_SOHO.
Output write tiles is limited to 2, e.g. also when used as a mock for NPU4.0, where more than 2 tiles are present and used for splits.
NPU2.7 splits by H with halo were trained to the NN using the memory tensor instead of the general rule for the compute tensor (the memory tensor is in general smaller by half a kernel). Calling the NN with the compute tensor introduces errors by reporting smaller values. To get corrected values (closer to ground truth) when generating the descriptor for NNs with interface 11 and the SOH ISI strategy, we use not the input tensor but a computed memory input tensor that mimics the one used at training.
NPU4.0 (in development)
Reusing: when using the 2.7-trained version as a mock, please read the NPU2.7 section above.
DW_CONV (depthwise convolution) with kernel 3×3 is optimized in NPU4.0 but not in NPU2.7. The NN-reported runtime is adjusted with a factor depending on datatype, channels and kernel size.
Trained NN for 4.0:
WIP
Known problems:
NPU2.7: the NN was not trained to discriminate the sporadic high runtime for swizzling. EISXW-98656 not solved (elementwise add with a big profiled CLUSTERING but small SOH). Test: RuntimeELT_CONV_SOH_SOK_EISXW_98656.
Elementwise accepts (at NN run) swizzling ON or OFF, but it has to be the same for all of in/out/wts: all 0 (OFF) or all 5 (ON); mixed combinations were not trained. To consider: training the NN with swizzling combinations (profiling shows the runtime is different).
SHAVE operators available
The SHAVE interface version 1 (the old one) will be deleted in the near future; do not use it.
SHAVE v2 interface is active.
Details of any operator can be obtained by calling the ShaveOpExecutor::toString() method.
For the most up-to-date list of operators and their details, see also the unit tests: TestSHAVE.SHAVE_v2_ListOfOperators, TestSHAVE.SHAVE_v2_ListOfOperatorsDetails_27, … .
For information about the profiled operators and extra parameters, you can consult this document.
Cost providers
The cost model is designed to be extensible. The cost providers are the classes that implement the cost model for a specific device. The cost providers are selected at runtime based on the device type. The following cost providers are available:
NN-based cost provider – a learned performance model.
Theoretical cost provider – a simple mathematical model.
“Oracle” cost provider – a LUT of measured performance for specific workloads.
Profiled cost provider – an HTTP service that can be queried to get the measured performance of a specific workload.
Currently it supports only DPU costs, and it can be configured using the following environment variables:
ENABLE_VPUNN_PROFILING_SERVICE — TRUE to enable the profiling service
VPUNN_PROFILING_SERVICE_BACKEND — silicon to use the RVP for profiling, vpuem to use VPUEM as a cost provider.
VPUNN_PROFILING_SERVICE_HOST — address of the profiling service host, default is irlccggpu04.ir.intel.com
VPUNN_PROFILING_SERVICE_PORT — port of the profiling service, default is 5000
To see a list of all queried workloads and which cost provider was used for each, set the environment variable ENABLE_VPUNN_DATA_SERIALIZATION to TRUE.
This will generate a couple of csv files in the directory where vpunn is used.
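For example, to route DPU queries to the profiling service on its documented default host and port, and also record which cost provider answered each query, you could set:
export ENABLE_VPUNN_PROFILING_SERVICE=TRUE
export VPUNN_PROFILING_SERVICE_BACKEND=silicon
export VPUNN_PROFILING_SERVICE_HOST=irlccggpu04.ir.intel.com
export VPUNN_PROFILING_SERVICE_PORT=5000
export ENABLE_VPUNN_DATA_SERIALIZATION=TRUE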
Prosemble provides a fully reproducible development environment using Nix and devenv. Once you have installed Nix and devenv, you can do the following:
git clone https://github.com/naotoo1/prosemble.git
cd prosemble
Activate the reproducible development environment with:
devenv shell
You may optionally consider using direnv for automatic shell activation when entering the project directory.
To install Prosemble in development mode, set up your environment with all the necessary dependencies while installing the package with live code-editing capabilities. To set up the local reproducible development environment, execute the command:
setup-python-env
To run Prosemble inside a reproducible Docker container, execute:
# Build the Docker container
create-cpu-container
# Run the container
run-cpu-container
When working with Prosemble in development mode, changes to the code take effect immediately without reinstallation. Use git pull to get the latest updates from the repository. Run tests after making changes to verify functionality.
Citation
If you use Prosemble in your research, please cite it using the following BibTeX entry:
T6.1-tropical-glaciers – Environmental suitability model
Tropical Glacier Ecosystems are under extreme pressure from climate change and face imminent collapse in this century.
We explore here future projections of one direct and one indirect indicator of key ecosystem properties and use these to examine the probable trajectories toward collapse of the ecosystem. We evaluate the usefulness of relative severity and extent of degradation for anticipating collapse.
We discuss here details of the suggested formula for calculation of relative severity $RS$ and different approaches to summarise and visualise data across the extent of the ecosystem assessment unit.
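For reference, relative severity is commonly expressed (following the IUCN Red List of Ecosystems guidelines; the exact formula suggested in this repository may differ in its details) as

$$RS = \frac{\text{observed or projected decline}}{\text{maximum decline}} = \frac{V_0 - V_t}{V_0 - V_C}$$

where $V_0$ is the initial value of the indicator, $V_t$ its current or projected value, and $V_C$ the value at which the ecosystem is considered collapsed.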
We use the tropical glacier ecosystems as a model because:
risk of ecosystem collapse is very high and well documented
future probabilities of collapse can be projected from mechanistic models,
the different assessment units differ in extent: from the isolated glaciers in Indonesia and Venezuela to the highly connected glaciers of the Sierra Blanca in Peru.
We use projected degradation of climatic suitability because:
it is conceptually linked to models used to calculate probability of collapse
it uses the same underlying variables, models and scenarios
we can explore different time frames (temporal scale of degradation)
we can explore uncertainty due to different models, scenarios and collapse thresholds
This repository includes all steps for fitting an environmental suitability model for tropical glacier ecosystems and comparing the results with simulation results from a hybrid model of glacier ice mass balance and dynamics.
The repository has the following structure:
env folder
The workflow was developed using different computers (named terra, humboldt, roraima), but most of the spatial analysis has been done in Katana @ UNSW ResTech:
Katana. Published online 2010. doi:10.26190/669X-A286
This folder contains scripts for defining the programming environment variables for working in Linux/MacOS.
notes folder
Notes about the configuration and use of some features and repositories: OSF project management with R, using the quarto book project, running pbs jobs in katana, fitting GLMM with the glmmTMB package.
inc folder
Scripts used for specific tasks: R scripts for functions, tables and figures, quarto documents for publication appendices and PBS scripts for scheduling jobs in the HPC nodes in Katana.
docs-src folder
This contains the (quarto-) markdown documents explaining the steps of the workflow from the raw data to the end products.
Fingerprints (renames) files based on their content using MD5 hash-based cache busting file names.
Replaces references in files to original file names with their MD5 hash-based file names.
Optionally outputs a manifest file to buster.manifest.json.
Simple and intuitive configuration using .buster.json.
Invokable via the command line and scriptable.
Easily integrates into your project workflow.
Installation
Install Globally
This is the ideal solution if you want to use Buster as a general utility from the command line.
$ npm install -g @4awpawz/buster
Install Locally
This is the ideal solution if you want to integrate Buster into your project.
$ npm install --save-dev @4awpawz/buster
Important
Buster Is Destructive. Buster does not make backups of your files. Buster performs its operations directly on the files that operational directives indicate. See “A Typical Buster Workflow” below.
Versions prior to v1.1.0 generated hashes based solely on the content of the files targeted by its operational directives. This opened up the opportunity for clashes on files that had no content. To address this issue, beginning with v1.1.0, Buster generates unique hashes for all files by including the path of the file targeted by an operational directive as well as its content.
Buster Primer
Site Relative File Paths And Site Relative URLs
In the documentation that follows, references are made to site relative file paths and to site relative URLs.
“Site relative file paths” pertain strictly to your project’s file structure. They are used as the input of operational directives, declaring the file paths of the assets in your project that you want Buster to target for cache busting.
“Site relative URLs” pertain strictly to your website’s runtime environment and are used to reference assets throughout your site (e.g. the src attribute of an img tag, the href attribute of a link tag, the URL() CSS function declared inside of a CSS stylesheet).
The important thing to understand here is that, in order for Buster to perform its cache busting, you, the developer, must ensure that your site employs site relative URLs when referencing its assets. This is because Buster converts your site relative file paths to site relative URLs, which it then uses to search the content of your site’s files for site relative URLs that need to be updated to point to the assets it has fingerprinted with unique hashes.
A Typical Buster Workflow
Your development build tool generates your production-ready site (as opposed to the development version) into your project’s release folder. When configuring Buster to cache bust your site, you target the project files in the release folder by using site relative file paths in your Buster configuration’s operational directives. Then, from the root of your project, you can run Buster from the command line to cache bust the site in the release folder. You can then run the site from the release folder to ensure it functions as expected, and once it does, deploy it directly from the release folder to its server using a command line utility such as rsync.
In a typical website project with a structure like the one shown in the example further below, the site relative file path used in an operational directive to target housecat.jpg would be release/media/housecat.jpg, and the site relative URL used to identify the image file in the browser would be media/housecat.jpg.
Operational Directives
Buster employs a concept called an Operational Directive, abbreviated od, which you declare in your .buster.json configuration file and which Buster uses to direct the operations it performs on your project’s files. Each od comprises two parts: an input and an operation.
Input
A site relative file path to one or more files.
Supports globs/wildcard patterns.
Important Buster assumes that all site relative file paths are relative to process.cwd().
Important Buster implements its glob support using node package glob. Please refer to node package glob should you need additional information on using globs with Buster.
Operation
Indicates the actions that Buster is to perform on the od’s input file(s). It is a number preceded by a colon which separates the number from the input (e.g. “:1”). The following 3 operations are currently supported:
:1
Apply this operation only to those files whose own file names are to be fingerprinted for cache busting purposes (e.g. .jpg, .gif, .map).
The format of the resulting file names is described under “Hashed File Name Format” below.
:2
Apply this operation only to those files whose contents are to be searched for site relative URLs that point to assets whose file names have been fingerprinted and therefore need to be updated, and whose own file names are not to be fingerprinted for cache busting purposes (e.g. .html).
:3
Apply this operation only to those files whose own file names are to be fingerprinted for cache busting purposes and whose contents are to be searched for site relative URLs that point to assets whose file names have been fingerprinted and therefore need to be updated (e.g. .css).
Hashed File Name Format
The format of each unique MD5 hash-based file name will be [original file’s base name].[unique hash].[original file’s extension] (e.g. cat.[unique hash].jpg). Should the original file’s base name contain 1 or more periods (e.g. main.js.map) the format of the MD5 hash-based file name will, as an example, be main.[unique hash].js.map.
Operational Directive Examples
Example Operational Directives Using Site Relative File Path:
Given the following project structure
|- myproject
|- |- release/
|- |- |- media/
|- |- |- |- housecat.jpg
|- |- |- index.html => contains an img tag with a site relative URL for its src, i.e. <img src="media/housecat.jpg">
|- |- .buster.json
and running Buster from the command line in the myproject folder with the following operational directives
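A sketch of what the corresponding .buster.json could look like is shown here; the exact property names are assumptions inferred from the descriptions above, so consult Buster's documentation for the authoritative schema:
{
  "directives": [
    "release/media/housecat.jpg:1",
    "release/index.html:2"
  ]
}
With these directives, housecat.jpg would be fingerprinted (operation :1) and index.html would be searched for site relative URLs to update (operation :2).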
This release only encompasses changes to the project’s README.md file, specifically for the addition of the solicitation to ‘Buy me a coffee’.
v1.1.1
This release only encompasses changes to the project’s documentation in this README.md file.
v1.1.0
This release includes an improved hashing algorithm that generates unique hashes for all files, including those that have no content.
v1.0.0
This is the first major release of Buster and incorporates many breaking changes from prior versions. Most notably, prior versions had a “safe mode” configuration option that would instruct Buster to cache bust “in place”, meaning that it would not create backups and would not be able to restore files to their prior state. As it turns out, the vast majority of Buster’s users are using “safe mode” because it fits their workflow of generating their site into a dedicated folder that can be cache busted and that could easily be repopulated by just regenerating the site. These changes were implemented to refactor Buster to precisely match this typical workflow.
v0.3.1
This release addresses fixes for security warnings for packages used internally by Buster only. There are no changes to the code base.
v0.3.0
This release addresses one bug and fixes for security warnings for packages used internally by Buster only. Also landing with this release is reduced console output; use the verbose config option if needed.
Major bug fixes:
Addresses issue #14 which could cause Buster to mangle hashed file names. Please note that beginning with this release, Buster now generates hashed file names as [hash]-[file name].[file extension]. You are strongly advised to upgrade your projects and rebuild them.
v0.2.4
This release addresses fixes for security warnings for packages used internally by Buster only. There are no changes to the code base.
v0.2.3
Major bug fixes:
Addresses issue #13 which would cause Buster to crash when reading a configuration file that doesn’t exist.
Addresses issue #12 which would cause Buster to crash when setting paramsConfig to a default value of {} to indicate that it wasn’t passed.
v0.2.2
This release includes no changes to the code base.
Addresses issue #11 which seeks to lockdown all project dependencies including descendants using NPM’s shrinkwrap.
v0.2.1
Major and minor bug fixes – includes but not limited to the following:
Addresses issue #10, which would cause Buster to fail when reading command line configuration data belonging to the application that launched it with paramsConfig.
Addresses issue #9 which would sometimes cause restore to fail. This fix totally replaces the one introduced in v0.2.0, and now handles the issue earlier in the restore processing cycle.
v0.2.0
Major refactor – includes but not limited to the following:
A simple web scraper to fetch the latest programming language rankings from the TIOBE Index. The data is extracted using Bun, TypeScript, and Regular Expressions, then saved as JSON and YAML.
Features
Fetches the latest programming language rankings from TIOBE
Extracts rank, name, percentage, and change in ranking