Blog

  • notes-api

    Notes Management Tool – REST-API

    Yii

    The REST-API for the Notes Management Tool.

    Note: This is the API only. You need the appropriate client which is hosted at https://github.com/tbreuss/notes-client.

    Install

    git clone https://github.com/tbreuss/notes-api.git
    cd notes-api
    composer install
    

    Create/import database

    Create a database at your hosting provider and import config/mysql-dump.sql.
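
    A minimal sketch of the import using the mysql command line client (the database name and user below are placeholders, not values prescribed by this project):

    mysql -u your_user -p your_database < config/mysql-dump.sql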

    Config

    Copy configuration files:

    cd config
    cp db.dist.php db.php
    cp params.dist.php params.php
    

    Edit both files according to your config settings.

    Run

    cd notes-api
    php yii serve -p 8888
    

    Open http://localhost:8888/ in your web browser.

    You should see:

    <response>
        <title>REST-API for Notes Management Tool</title>
        <info>You need an appropriate client to access this API</info>
        <github>https://github.com/tbreuss/notes-client</github>
        <url>https://notes.tebe.ch</url>
    </response>
    

    Build

    composer build
    

    Build a zip archive for production. Needs globally installed git and composer and an existing config/prod.env.php.

    Endpoints

    To be done.

    GET v1/ping

    POST v1/login

    GET v1/articles/id:\d+

    PUT v1/articles/id:\d+

    DELETE v1/articles/id:\d+

    GET v1/articles

    POST v1/articles

    GET v1/articles/latest

    GET v1/articles/liked

    GET v1/articles/modified

    GET v1/articles/popular

    POST v1/articles/upload

    GET v1/users/id:\d+

    GET v1/users

    GET v1/tags/id:\d+

    GET v1/tags

    GET v1/tags/selected

    cURL calls

    To be done.

    Post login

    curl -i --header "Content-Type: application/json" --request POST --data '{"username":"xyz","password":"xyz"}' http://localhost:8888/v1/login
    curl -i --header "Content-Type: application/json" --request OPTIONS --data '{"username":"xyz","password":"xyz"}' http://localhost:8888/v1/login
    curl -i -X OPTIONS http://localhost:8888/ --header "Content-Type: application/json"
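
    After a successful login, subsequent requests are presumably authenticated with the returned token. A minimal sketch of such a call (the Bearer scheme and the <token> placeholder are assumptions, not confirmed by this README):

    curl -i --header "Authorization: Bearer <token>" http://localhost:8888/v1/articles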
    
    Visit original content creator repository https://github.com/tbreuss/notes-api
  • url_shortening

    Getting Started with Create React App

    This project was bootstrapped with Create React App.

    Available Scripts

    In the project directory, you can run:

    npm start

    Runs the app in development mode.
    Open http://localhost:3000 to view it in your browser.

    The page will reload when you make changes.
    You may also see any lint errors in the console.

    npm test

    Launches the test runner in interactive watch mode.
    See the section about running tests for more information.

    npm run build

    Builds the app for production to the build folder.
    It correctly bundles React in production mode and optimizes the build for the best performance.

    The build is minified and the filenames include the hashes.
    Your app is ready to be deployed!

    See the section about deployment for more information.

    npm run eject

    Note: this is a one-way operation. Once you eject, you can’t go back!

    If you aren’t satisfied with the build tool and configuration choices, you can eject at any time. This command will remove the single build dependency from your project.

    Instead, it will copy all the configuration files and the transitive dependencies (webpack, Babel, ESLint, etc) right into your project so you have full control over them. All of the commands except eject will still work, but they will point to the copied scripts so you can tweak them. At this point you’re on your own.

    You don’t have to ever use eject. The curated feature set is suitable for small and middle deployments, and you shouldn’t feel obligated to use this feature. However we understand that this tool wouldn’t be useful if you couldn’t customize it when you are ready for it.

    Learn More

    You can learn more in the Create React App documentation.

    To learn React, check out the React documentation.

    Code Splitting

    This section has moved here: https://facebook.github.io/create-react-app/docs/code-splitting

    Analyzing the Bundle Size

    This section has moved here: https://facebook.github.io/create-react-app/docs/analyzing-the-bundle-size

    Making a Progressive Web App

    This section has moved here: https://facebook.github.io/create-react-app/docs/making-a-progressive-web-app

    Advanced Configuration

    This section has moved here: https://facebook.github.io/create-react-app/docs/advanced-configuration

    Deployment

    This section has moved here: https://facebook.github.io/create-react-app/docs/deployment

    npm run build fails to minify

    This section has moved here: https://facebook.github.io/create-react-app/docs/troubleshooting#npm-run-build-fails-to-minify

    Visit original content creator repository
    https://github.com/iammmila/url_shortening

  • OpenDNAS

    This project is no longer being worked on because nginx only supports HTTP/1.1, which does not work with DNAS (DNAS requires HTTP/1.0).

    Use clank-dnas instead.

    OpenDNAS

    An Open Source replacement DNAS server.

    What is OpenDNAS

    OpenDNAS is an Open Source implementation of the production DNAS servers hosted by SCEI for authenticating Sony PlayStation clients to play multiplayer games.

    On April 4, 2016, SCEI discontinued the official DNAS servers, forcefully taking down hundreds of multiplayer game titles with it.

    OpenDNAS aims to be a solution to this, providing successful authentication for emulators and genuine PlayStations.

    Requirements

    • nginx (DNAS does not work with HTTP/1.1 …)
    • OpenSSL 1.0.2i (or older, as long as it supports SSLv2).
    • php7.0.15-fpm (mcrypt_encrypt removed in 7.2).

    Installation

    Please do not run this application on a production system directly. This application requires OpenSSL 1.0.2i compiled with SSLv2 support, which is no longer secure.

    Instead use a container. Such as clank-dnas.

    A sample nginx.vhost has been provided.

    • The certs/ directory should become /etc/nginx/certs.
    • The public/ directory should become /var/www/OpenDNAS/public.
    • The nginx.vhost file should be configured, added to /etc/nginx/sites-available, and then linked to /etc/nginx/sites-enabled.
    • You will need to generate your own SSL cert for opendnas.localhost (a sketch follows this list).
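
    A self-signed certificate can be generated with OpenSSL, for example as sketched below (the key size and validity period are arbitrary choices; the file locations follow the certs/ mapping above):

    openssl req -x509 -newkey rsa:2048 -nodes \
      -keyout /etc/nginx/certs/opendnas.localhost.key \
      -out /etc/nginx/certs/opendnas.localhost.crt \
      -days 365 -subj "/CN=opendnas.localhost"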

    Visit original content creator repository
    https://github.com/hashsploit/OpenDNAS

  • npu-nn-cost-model

    VPUNN cost model

    A NN-Based Cost Model for VPU Devices. For additional information about model setup and training, please refer to this paper.

    If you find this work useful, please cite the following paper:

    @article{DBLP:journals/corr/abs-2205-04586,
      doi = {10.48550/ARXIV.2205.04586},
      url = {https://arxiv.org/abs/2205.04586},
      author = {Hunter, Ian Frederick Vigogne Goodbody and Palla, Alessandro and Nagy, Sebastian Eusebiu and Richmond, Richard and McAdoo, Kyle},
      title = {Towards Optimal VPU Compiler Cost Modeling by using Neural Networks to Infer Hardware Performances},
      publisher = {arXiv},
      year = {2022},
      copyright = {arXiv.org perpetual, non-exclusive license}
    }
    

    Setup

    GCC version should be greater than 9. You can check your GCC version by running gcc --version and g++ --version.

    If you do not set CC and CXX environment variables, which gcc and which g++ are used by default.

    Compile the library by typing cmake -H. -Bbuild && cmake --build build
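
    For example, to build with an explicitly selected toolchain (the gcc-9 paths below are placeholders for whatever compilers you have installed):

    export CC=/usr/bin/gcc-9
    export CXX=/usr/bin/g++-9
    cmake -H. -Bbuild && cmake --build build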

    @TODO: environment compatible with newer compiler versions (gcc >= 10, clang > 10)

    Use Intel oneAPI MKL

    Install the oneAPI Base Toolkit (instructions). oneAPI is massive, so feel free to install only the Math Kernel Library.

    If you have trouble with a proxy, please export no_proxy=127.0.0.1 so that your proxy settings do not interfere with *.intel.com URLs.

    To enable MKL you need to source the file /opt/intel/oneapi/setvars.sh to set the appropriate environment variables. Look here for how to get started with VS Code.

    Select BLAS library

    You can select which BLAS library to use (assuming you have MKL installed) and the threading mode by using the following cmake variables; an example invocation follows the list:

    • -DCBLAS_LIB=<value> (options: mkl for oneMKL and openblas for OpenBLAS)
    • -DMKL_THREADING=<value> (options: tbb for oneAPI Threading Building Blocks and sequential for no threading)
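
    For example, a minimal sketch selecting oneMKL with TBB threading, combined with the build command shown above:

    cmake -H. -Bbuild -DCBLAS_LIB=mkl -DMKL_THREADING=tbb && cmake --build build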

    Using the cost model: C++

    Using the VPUNN cost model in a cmake project is quite simple. An example of a CMakeLists.txt file is shown below:

    include_directories(${CMAKE_BINARY_DIR}/include)
    include_directories(${FLATBUFFERS_SRC_DIR}/include)
    
    ...
    
    target_link_libraries(<your exe or lib> inference)

    The following example code explains how to instantiate the cost model and how to run a simple query for a 3x3s1 convolution

    #include "vpu_cost_model.h"
    
    auto model = VPUNN::VPUCostModel(model_path);
    
    auto dpu_cycles = model.DPU({VPUNN::VPUDevice::VPU_2_7,
                                 VPUNN::Operation::CONVOLUTION,
                                 {VPUNN::VPUTensor(56, 56, 16, 1, VPUNN::DataType::UINT8)}, // input dimensions
                                 {VPUNN::VPUTensor(56, 56, 16, 1, VPUNN::DataType::UINT8)}, // output dimensions
                                 {3, 3}, //kernels
                                 {1, 1}, //strides
                                 {1, 1}, //padding
                                 VPUNN::ExecutionMode::CUBOID_16x16} // execution mode
                                );

    The example folder contains a few examples of how to build and use the cost model in a C++ project. The following list of supported examples is a work in progress:

    • workload_mode_selection:
      • Selecting the optimal MPE mode for a VPU_2_0 workload
      • Choosing the optimal workload split strategy among multiple ones

    Using the cost model: Python

    You can install the library by typing pip install .

    Do this in a Python virtual environment.
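
    For example (the environment name .venv is an arbitrary choice):

    python -m venv .venv
    source .venv/bin/activate
    pip install .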

    Cost models

    Run the vpu_cost_model script to evaluate workloads from the command line

    usage: vpu_cost_model [-h] --model MODEL [-t {cycles,power,utilization}] {VPU_2_7,VPU_4_0} ...
    
    VPU cost model
    
    positional arguments:
      {VPU_2_7,VPU_4_0}
    
    options:
      -h, --help            show this help message and exit
      --model MODEL, -m MODEL
                            Model path

    There are two possible VPU versions, and each version has a DPU and a DMA model. It is possible to bring up the help menu in the following ways:

    vpu_cost_model VPU_2_7 DPU -h
    vpu_cost_model VPU_2_7 DMA -h
    vpu_cost_model VPU_4_0 DPU -h
    vpu_cost_model VPU_4_0 DMA -h
    

    Minimal example usage:

    vpu_cost_model VPU_2_7 DPU -o CONVOLUTION --inch 64 --outch 64 --height 16 --width 16 --kh 3 --kw 3 --indt UINT8 --outdt UINT8 --mpe-mode CUBOID_16x16
    vpu_cost_model VPU_2_7 DMA -l 1024 --sw 1024 --dw 1024 -d DDR2CMX
    vpu_cost_model VPU_4_0 DPU -o CONVOLUTION --inch 64 --outch 64 --height 16 --width 16 --kh 3 --kw 3 --indt UINT8 --outdt UINT8 --mpe-mode CUBOID_16x16
    vpu_cost_model VPU_4_0 DMA -l 1024 --sw 1024 --dw 1024 -d DDR2CMX
    

    VPUNN builder

    Generate a VPUNN model from a TensorFlow one

    optional arguments:
      -h, --help       show this help message and exit
      --name NAME      Model name
      --output OUTPUT  Output model (default model.vpunn)

    VPUNN to JSON

    Convert a VPUNN model into JSON for debugging purposes

    usage: vpunn_to_json [-h] file
    
    positional arguments:
      file        graphFile to deserialize to json OR an already deserialized json
    
    optional arguments:
      -h, --help  show this help message and exit

    Javascript (WASM) support

    To compile the Web Assembly (WASM) version of the library, follow the steps below:

    1. Install Emscripten (link here)
    2. Configure Emscripten with cmake by typing emmake cmake ..
    3. Build the Javascript interface emmake make vpunn_js -j

    The build command produces an npm package that can be later installed in any js project by doing npm install <path to build folder>/dist/vpunn-*.tgz
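
    Putting the steps together, a typical out-of-source build might look like this (build_wasm is just the directory name that the WASM test section below assumes):

    mkdir build_wasm && cd build_wasm
    emmake cmake ..
    emmake make vpunn_js -j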

    Developer guide

    Git hooks

    All developers should install the git hooks that are tracked in the .githooks directory. We use the pre-commit framework for hook management. The recommended way of installing it is using pip:

    pip install pre-commit

    The hooks can then be installed into your local clone using:

    pre-commit install --allow-missing-config

    --allow-missing-config is an optional argument that allows users to have the hooks installed and functional even when using an older branch that does not track them. A warning will be displayed in such cases when the hooks are run.

    If you want to manually run all pre-commit hooks on a repository, run pre-commit run --all-files. To run individual hooks use pre-commit run <hook_id>.

    Uninstalling the hooks can be done using

    pre-commit uninstall

    Testing the library

    Cost model test (C++)

    Tests use the Google Test suite for automating tests.
    To run the test suite: ctest --test-dir build/tests/cpp/

    Example: running only the cost model integration test: ./tests/cpp/test_cost_model

    E2E Python test

    pytest tests/python/test_e2e.py -v

    WASM test

    Assuming you built the VPUNN WASM library in build_wasm, install VPUNN locally with all its dependencies:

    npm install --prefix tests/js
    npm install --save-optional build_wasm/dist/vpunn-*.tgz --prefix tests/js

    Start testing by running

    npm run test --prefix=tests/js

    Code coverage

    To generate Code coverage report you need to enable it in CMake

    cmake -DCMAKE_BUILD_TYPE=Coverage .. && make coverage -j

    This command generates a coverage folder inside the build folder with all the coverage information.

    Dependencies:

    • Gcov-9 and Gcovr tools are needed in order to generate the report
    • Only GCC is supported (no WASM/Visual Studio)

    Notes about configurations not covered by training, or with greater errors

    NPU2.0

    Not Available

    NPU2.7

    • ISI=CLUSTERING + OWT=2: replaced at runtime with SOK. Runtime should be the same; no input halo used.

    • Elementwise + ISI=SOK: replaced at runtime with clustering + owt=1; the time is slightly undervalued, but it's the best approximation available.

    • CM_CONV (compress convolution) + InputChannels=1

    • SOH (HALO) split with kernel = 1 has probably not been part of training; it doesn't make sense to have kernel = 1 and input halo, and NN predictions are problematic. Replaced at runtime with Clustering.

    • SOH Halo split, at least when H is small and K is small, produces much bigger results than SOH Overlapped. This is not realistic and might be an NN limitation. See VPULayerCostModelTest.Unet_perf_SOH_SOK_after_SOHO.

    • Output write tiles is limited to 2, e.g. also when used as a mock for NPU4.0, where more than 2 tiles are present and used for split.

    • NPU2.7 splits by H with halo were trained to the NN using the memory tensor instead of the general rule for the compute tensor (the memory tensor is in general smaller by half a kernel). Calling the NN with the compute tensor introduces errors by reporting smaller values. To get corrected values (closer to the ground truth) when generating the descriptor for NNs with interface 11 and the SOH ISI strategy, we use not the input tensor but a computed memory input tensor that mimics the one used at training.

    NPU4.0 (in development)

    Reusing: when using the 2.7 trained version as a mock, please read the NPU2.7 section above.

    • DW_CONV (depthwise convolution) with kernel 3x3 is optimized in NPU4.0, but not in NPU2.7. The NN-reported runtime is adjusted with a factor depending on datatype, channels, and kernel size.

    Trained NN for 4.0:

    • WIP

    Known problems:

    • NPU2.7: the NN was not trained to discriminate the sporadic high runtime for swizzling. EISXW-98656 is not solved (element-wise add with a big profiled CLUSTERING but a small SOH). Test: RuntimeELT_CONV_SOH_SOK_EISXW_98656.
      Elementwise accepts (at NN run) swizzling ON or OFF, but it has to be the same for all of in/out/wts: all 0 (OFF) or all 5 (ON); mixed combinations were not trained. To consider: training the NN with swizzling combinations (profiling shows the runtime is different).

    SHAVE operators available

    SHAVE interface version 1 (the old one) will be deleted in the near future; do not use it.
    The SHAVE v2 interface is active.

    Details of any operator can be obtained by calling the ShaveOpExecutor::toString() method.

    For the most up-to-date list of operators and their details, see also the unit tests: TestSHAVE.SHAVE_v2_ListOfOperators, TestSHAVE.SHAVE_v2_ListOfOperatorsDetails_27, …

    For information about the profiled operators and extra parameters, you can consult this document.

    Cost providers

    The cost model is designed to be extensible. Cost providers are the classes that implement the cost model for a specific device, and they are selected at runtime based on the device type. The following cost providers are available:

    • NN-based cost provider – a learned performance model.
    • Theoretical cost provider – a simple mathematical model.
    • “Oracle” cost provider – a LUT of measured performance for specific workloads.
    • Profiled cost provider – an HTTP service that can be queried for the measured performance of a specific workload.
      • Currently it supports only DPU costs, and it can be configured using the following environment variables:
        • ENABLE_VPUNN_PROFILING_SERVICE=TRUE to enable the profiling service
        • VPUNN_PROFILING_SERVICE_BACKEND=silicon to use the RVP for profiling, or =vpuem to use VPUEM as a cost provider
        • VPUNN_PROFILING_SERVICE_HOST to set the address of the profiling service host; the default is irlccggpu04.ir.intel.com
        • VPUNN_PROFILING_SERVICE_PORT to set the port of the profiling service; the default is 5000

    To see a list of all queried workloads and which cost provider was used for each, set the environment variable ENABLE_VPUNN_DATA_SERIALIZATION to TRUE.
    This will generate a couple of CSV files in the directory where VPUNN is used.
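
    For example, assuming these are set as ordinary process environment variables, a configuration that routes DPU queries to the silicon backend with the defaults above, and that also logs which cost provider answered each workload, might look like:

    export ENABLE_VPUNN_PROFILING_SERVICE=TRUE
    export VPUNN_PROFILING_SERVICE_BACKEND=silicon
    export VPUNN_PROFILING_SERVICE_HOST=irlccggpu04.ir.intel.com
    export VPUNN_PROFILING_SERVICE_PORT=5000
    export ENABLE_VPUNN_DATA_SERIALIZATION=TRUE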

    Visit original content creator repository
    https://github.com/intel/npu-nn-cost-model

  • prosemble

    Prosemble


    Overview

    Prosemble is a Python library for prototype-based machine learning models.

    Installation

    Prosemble can be installed using pip:

    pip install prosemble

    If you have installed Prosemble before and want to upgrade to the latest version, you can run the following command in your terminal:

    pip install -U prosemble

    To install the latest development version directly from the GitHub repository:

    pip install git+https://github.com/naotoo1/prosemble

    Development Environment

    Prosemble provides a fully reproducible development environment using Nix and devenv. Once you have installed Nix and devenv, you can do the following:

    mkdir -p ~/.config/nix
    echo "experimental-features = nix-command flakes" >> ~/.config/nix/nix.conf
    nix profile install --accept-flake-config "github:cachix/devenv/latest"

    Then clone and enter the project directory:

    git clone https://github.com/naotoo1/prosemble.git
    cd prosemble

    Activate the reproducible development environment with:

    devenv shell

    You may optionally consider using direnv for automatic shell activation when entering the project directory.

    Installing Prosemble in development mode sets up your environment with all the necessary dependencies while ensuring the package is installed with live code editing capabilities. To set up the local reproducible development environment, execute the command:

    setup-python-env

    To run Prosemble inside a reproducible Docker container, execute:

    # Build the Docker container
    create-cpu-container
    # Run the container 
    run-cpu-container

    When working with Prosemble in development mode, changes to the code take effect immediately without reinstallation. Use git pull to get the latest updates from the repository, and run the tests after making changes to verify functionality.

    Citation

    If you use Prosemble in your research, please cite it using the following BibTeX entry:

    @misc{Otoo_Prosemble_2022,
      author       = {Otoo, Nana Abeka},
      title        = {Prosemble},
      year         = {2022},
      publisher    = {GitHub},
      journal      = {GitHub repository},
      howpublished = {\url{https://github.com/naotoo1/Prosemble}},
    }
    Visit original content creator repository https://github.com/naotoo1/prosemble
  • T6.1-tropical-glaciers-suitability-model

    T6.1-tropical-glaciers – Environmental suitability model

    Tropical Glacier Ecosystems are facing extreme pressure due to climate change and face imminent collapse in this century.

    We explore here future projections of one direct and one indirect indicator of key ecosystem properties and use these to explore the probable trajectories toward collapse of the ecosystem. We evaluate the usefulness of relative severity and extent of degradation to anticipate collapse.

    We discuss here details of the suggested formula for calculation of relative severity $RS$ and different approaches to summarise and visualise data across the extent of the ecosystem assessment unit.

    We use the tropical glacier ecosystems as a model because:

    • risk of ecosystem collapse is very high and well documented
    • future probabilities of collapse can be projected from mechanistic models,
    • the different assessment units differ in extent: from the isolated glaciers in Indonesia and Venezuela to the highly connected ones of the Sierra Blanca in Peru.

    We use projected degradation of climatic suitability because:

    • it is conceptually linked to models used to calculate probability of collapse
    • it uses the same underlying variables models and scenarios
    • we can explore different time frames (temporal scale of degradation)
    • we can explore uncertainty due to different models, scenarios and collapse thresholds

    This repository includes all steps for fitting an environmental suitability model for tropical glacier ecosystems and comparing the results with simulation results from a hybrid model of glacier ice mass balance and dynamics.

    The repository has the following structure:

    env folder

    The workflow was developed using different computers (named terra, humboldt, roraima), but most of the spatial analysis has been done in Katana @ UNSW ResTech:

    Katana. Published online 2010. doi:10.26190/669X-A286

    This folder contains scripts for defining the programming environment variables for working in Linux/MacOS.

    notes folder

    Notes about the configuration and use of some features and repositories: OSF project management with R, using the quarto book project, running pbs jobs in katana, fitting GLMM with the glmmTMB package.

    inc folder

    Scripts used for specific tasks: R scripts for functions, tables and figures, quarto documents for publication appendices and PBS scripts for scheduling jobs in the HPC nodes in Katana.

    docs-src folder

    This contains the (quarto-) markdown documents explaining the steps of the workflow from the raw data to the end products.

    Visit original content creator repository
    https://github.com/red-list-ecosystem/T6.1-tropical-glaciers-suitability-model

  • Laradmin

    An admin management system based on Laravel 5.4

    Redis cache is required; the file and database cache drivers do not support tags().

    Windows Redis download: http://pan.baidu.com/s/1i56thcD

    Quick start with Redis caching, and Redis caching in Laravel 5: https://laravel-china.org/topics/877

    Please purchase the front-end template yourself; if there is any infringement, please contact the author.

    Packages used

    User roles and permissions: zizaco/entrust

    Redis: predis/predis

    Active menu highlighting: hieu-le/active

    Database repository layer: prettus/l5-repository (queries return results in array format)

    Architecture notes: http://oomusou.io/laravel/laravel-architecture/

    jQuery DataTables API for Laravel: yajra/laravel-datatables-oracle

    Log viewer: arcanedev/log-viewer (app log config: 'log' => env('APP_LOG', 'daily'))

    Image processing: intervention/image

    PHP Redis extension

    PHP cURL extension

    PHP OpenSSL extension

    PHP fileinfo extension (required by the media management module)

    http://datatables.club/

    https://datatables.yajrabox.com

    Inline editing: https://vitalets.github.io/x-editable/docs.html

    Changelog

    https://github.com/DukeAnn/Laradmin/blob/master/UpdateLog.md

    Installation

    1. Clone the code locally.

    2. Run composer install.

    3. Set up the .env configuration file to connect to the database and the default mail server, set APP_URL=http://laradmin.app, and run php artisan key:generate to generate the application key.

    4. Run the migrations and seeders: php artisan migrate --seed

    Installation complete.

    Demo: http://admin.amyair.cn

    Test account: just register via the link at the top right.

    Basic notes

    1. The permission management package does not use the l5-repository database package.

    2. Automatic highlighting of the active item in the back-end sidebar menu requires that all site routes be named, with routes under the same menu item sharing a common name prefix,
    e.g. admin.index, admin.create, admin.show, admin.edit (resource routes are named automatically).
    The sidebar menu only displays route names ending in index. Use route() for programmatic redirects.
    If the permissions of all sub-items under a top-level menu are denied, add a user permission for the top-level menu itself and set it to denied, and the menu will no longer be displayed.
    How menu display works: the menu URI is used to look up a user permission; if that permission is configured, it is checked and the item is hidden when the user lacks it; if it is not configured, no permission is required by default.
    For menu items with children, the configured URI is not output in the HTML (only javascript:; is output), so setting a nonexistent route name there causes no error; for items without children, the URI is resolved with the route() function, which raises an error if the route name does not exist.

    3. In-page breadcrumbs are written in the language files, where each entry's name corresponds to the value of Route::currentRouteName() (the route name). Breadcrumbs are generated by the injected service App\Presenters\Admin\CrumbsService; entries not defined in the language file are displayed as their raw key.

    4. Closure routes must not be used; every route must be named, otherwise Route::currentRouteName() does not work, and both permission checks and menu navigation rely entirely on route names.

    5. Permission checks bind permissions to route names and are enforced in the app/Http/Middleware/CheckPermission.php middleware; form submission permissions are validated in app/Http/Requests. Routes whose names are not bound to any permission are not restricted.

    6. When adding a menu in the back end, you must not enter a route name that does not exist, or the site will break: a newly added menu is shown in the sidebar immediately, and a route name that does not exist cannot be resolved, which raises an error. If you enter a wrong name by accident, run php artisan cache:clear to clear the cache, delete the bad row that was inserted into the database, and refresh the page.

    JSON format used across the API

    {
        "code": 0,
        "url": "http://...",
        "message": "...",
        "errors": [
            {
                "code": 10000,
                "field": "user",
                "message": "用户 不存在。"
            }
        ],
        "pagination": {
            "total": 10,
            "per_page": 10,
            "current_page": 1,
            "last_page": 1,
            "from": 1,
            "to": 10
        },
        "data": {
            ...
        }
    }

    JSON field notes

    code: the status code of the result; 0 on success; required.

    url: the redirect URL after success; optional.

    message: a notification message on completion; optional.

    errors: error details for the request.

    pagination: pagination information for the request.

    data: the requested data.

    errors and data cannot both be present.

    Returning responses

    return response(['code' => -1, 'message' => 'Incorrect account or password'], 400); is automatically converted to JSON

    or

    return response()->json(['code' => -1, 'message' => 'Incorrect account or password'], 400);

    AJAX parsing

    var settings = {
            type: "POST",
            data:{},
            url: url,
            dataType: "json",
            success: function (data) {
                if (data.code == 0) {
                    window.location.href = data.url;
                }
            },
            error: function (XMLHttpRequest) {
                $('#login-error').show();
                if (XMLHttpRequest.responseJSON.code == -1){
                    $('#login-error-message').text(XMLHttpRequest.responseJSON.message);
                } else {
                    $('#login-error-message').text("Please enter your email and password");
                }
            },
            headers: {
                'X-CSRF-TOKEN': $('meta[name="csrf-token"]').attr('content')
            }
        };
        $.ajax(settings)

    Commands for creating database models

    php artisan make:entity name automatically creates the model file, the database migration file, the two files under Repository, and the Providers file; it can optionally generate Presenter, Validator, and Controller files.

    php artisan make:repository name generates the model file, the database migration file, and the two files under Repository.

    Visit original content creator repository
    https://github.com/DukeAnn/Laradmin

  • buster

    A Cache Buster Called Buster

    Buster busts your browser cache problems!

    Version

    v1.2.1

    Features

    • Cache busts your project’s files in place.

    • Fingerprints (renames) files based on their content using MD5 hash-based cache busting file names.

    • Replaces references in files to original file names with their MD5 hash-based file names.

    • Optionally outputs a manifest file to buster.manifest.json.

    • Simple and intuitive configuration using .buster.json.

    • Invokable via the command line and scriptable.

    • Easily integrates into your project workflow.

    Installation

    Install Globally

    This is the ideal solution if you want to use Buster as a general utility from the command line.

    $ npm install -g @4awpawz/buster
    

    Install Locally

    This is the ideal solution if you want to integrate Buster into your project.

    $ npm install --save-dev @4awpawz/buster
    

    Important

    • Buster Is Destructive. Buster does not make backups of your files. Buster performs its operations directly on the files that operational directives indicate. See “A Typical Buster Workflow” below.

    • Versions prior to v1.1.0 generated hashes based solely on the content of the files targeted by its operational directives. This opened up the opportunity for clashes between files that had no content. To address this issue, beginning with v1.1.0 Buster generates unique hashes for all files by including both the path of the file targeted by an operational directive and its content.

    Buster Primer

    Site Relative File Paths And Site Relative URLs

    In the documentation that follows, references are made to site relative file paths and to site relative URLs.

    1. “site relative file paths” pertain strictly to your project’s file structure. They are used to declare the input in operational directives when declaring the file paths to assets in your project that you want targeted by Buster for cache busting.

    2. “Site relative URLs” pertain strictly to your website’s runtime environment and are used to reference assets throughout your site (e.g. the src attribute of an img tag, the href attribute of a link tag, the URL() CSS function declared inside of a CSS stylesheet).

    The important thing here is to understand that in order for Buster to perform its cache busting, you, the developer, must ensure that your site employs site relative URLs when referencing its assets. This is because Buster converts your site relative file paths to site relative URLs, which it then uses to search the content of your site's files for site relative URLs that need to be updated to point to the assets it has fingerprinted with unique hashes.

    A Typical Buster Workflow

    Your development build tool generates your production-ready site (as opposed to development) into your project's release folder. When configuring Buster to cache bust your site, you target your project files in the release folder by using site relative file paths in your Buster configuration's operational directives. Then, from the root of your project, you can use the command line to run Buster to cache bust your site in the release folder. You can then run your site from the release folder to ensure that it is functioning as expected, and once that is determined you can deploy your site directly from the release folder to its server using a command line utility such as rsync.

    In a typical website project with the following or similar project structure

    |- myproject
    |- |- release/
    |- |- |- media/
    |- |- |- |- housecat.jpg
    |- |- |- index.html
    |- |- .buster.json
    

    the site relative file path used in an operational directive to target housecat.jpg would be release/media/housecat.jpg and the site relative URL used to identify the image file in the browser would be /media/housecat.jpg.

    Operational Directives

    Buster employs a concept called an Operational Directive, abbreviated od, which you declare in your .buster.json configuration file and which Buster uses to direct the operations it performs on your project's files. Each od is composed of 2 parts: an input and an operation.

    Input

    A site relative file path to one or more files.

    Supports globs/wildcard patterns.

    Important Buster assumes that all site relative file paths are relative to process.cwd().

    Important Buster implements its glob support using node package glob. Please refer to node package glob should you need additional information on using globs with Buster.

    Operation

    Indicates the actions that Buster is to perform on the od’s input file(s). It is a number preceded by a colon which separates the number from the input (e.g. “:1”). The following 3 operations are currently supported:

    :1

    Apply this operation only to those files whose own file names are to be fingerprinted for cache busting purposes (e.g. .jpg, .gif, .map).

    :2

    Apply this operation only to those files whose contents are to be searched for site relative URLs that point to assets whose file names have been fingerprinted and therefore need to be updated, and whose own file names are not to be fingerprinted for cache busting purposes (e.g. .html).

    :3

    Apply this operation only to those files whose own file names are to be fingerprinted for cache busting purposes and whose contents are to be searched for site relative URLs that point to assets whose file names have been fingerprinted and therefore need to be updated (e.g. .css).

    Hashed File Name Format

    The format of each unique MD5 hash-based file name will be [original file’s base name].[unique hash].[original file’s extension] (e.g. cat.[unique hash].jpg). Should the original file’s base name contain 1 or more periods (e.g. main.js.map) the format of the MD5 hash-based file name will, as an example, be main.[unique hash].js.map.

    Operational Directive Examples

    Example Operational Directives Using Site Relative File Path:

    Given the following project structure

    |- myproject
    |- |- release/
    |- |- |- media/
    |- |- |- |- housecat.jpg
    |- |- |- index.html => contains img tag with a site relative url for its src i.e. <img src="/media/housecat.jpg">
    |- |- .buster.json
    

    and running Buster from the command line in the myproject folder with the following operational directives

    `release/media/housecat.jpg:1`
    `release/index.html:2`
    

    will result in the following:

    |- myproject
    |- |- release/
    |- |- |- media/
    |- |- |- |- housecat.[unique hash].jpg
    |- |- |- index.html => now contains img tag whose src attribute points to hashed img i.e. <img src="/media/housecat.[unique hash].jpg">
    |- |- .buster.json
    

    Example Operational Directives Using Site Relative File Paths And Globs:

    Given the following project structure

    |- myproject
    |- |- release/
    |- |- |- media/
    |- |- |- |- housecat.jpg
    |- |- |- |- purringcat.jpg
    |- |- |- |- bigcats/
    |- |- |- |- |- lion.jpg
    |- |- |- |- |- tiger.jpg
    |- |- |- index.html => contains img tags with site relative urls for its src e.g. <img src="/media/housecat.jpg">, <img src="/media/bigcats/lion.jpg">
    |- |- .buster.json
    

    and running Buster with the following directives

    `release/media/**/*.jpg:1`
    `release/**/*.html:2`
    

    will result as follows:

    |- myproject
    |- |- release/
    |- |- |- media/
    |- |- |- |- housecat.[unique hash].jpg
    |- |- |- |- purringcat.[unique hash].jpg
    |- |- |- |- bigcats/
    |- |- |- |- |- lion.[unique hash].jpg
    |- |- |- |- |- tiger.[unique hash].jpg
    |- |- |- index.html => now contains img tags whose src attributes point to hashed img i.e. <img src="/media/housecat.[unique hash].jpg">, <img src="/media/bigcats/lion.[unique hash].jpg">
    |- |- .buster.json
    

    .buster.json Configuration

    Important Buster expects .buster.json to reside in your project’s root folder, alongside package.json.

    {
        "options": {
            "manifest": true,
            "verbose": true,
            "ignore": "media/original/**/*.jpg,media/original/**/*.gif"
        },
        "directives": [
            "release/media/**/*.jpg:1",
            "release/./index.html:2",
            "release/css/test.css:3",
            "release/script/test.js:3"
        ]
    }

    Options

    Buster supports the following configuration options:

    ignore

    A quoted list of one or more comma-separated site relative file paths to files that are to be ignored; defaults to "".

    Supports globs and wildcard patterns.

    manifest

    A boolean, true to save the manifest to buster.manifest.json in the project’s root folder, defaults to false.

    verbose

    A boolean, true to output verbose logging, defaults to false.

    Typical Workflows

    Integrating Buster Into Your Project’s Workflow

    Install Buster locally:

    myproject > $ npm install -D @4awpawz/buster

    Then create a .buster.json configuration file in your project’s root folder, alongside package.json:

    {
        "directives": [
            "release/media/**/*.jpg:1",
            "release/css/**/*.css.map:1",
            "release/scripts/**/*.js.map:1",
            "release/**/*.html:2",
            "release/css/**/*.css:3",
            "release/scripts/**/*.js:3"
        ]
    }

    Then add the following to your project’s package.json’s scripts property:

    "scripts": {
        "bust": "buster"
    }

    You can then run buster from the command line by invoking it as follows:

    myproject > npm run bust

    Calling Buster From Within A Script

    Buster can be called from within a script, allowing it to be used as part of a greater workflow:

    const buster = require("@4awpawz/buster");
    
    const paramsConfig = {
        options: {
            manifest: true
        },
        directives: [
            "release/media/**/*.jpg:1",
            "release/css/**/*.css.map:1",
            "release/scripts/**/*.js.map:1",
            "release/**/*.html:2",
            "release/css/**/*.css:3",
            "release/scripts/**/*.js:3"
        ]
    }
    
    // Wrap the call in an async IIFE, since top-level await is not available in a CommonJS script.
    (async () => {
        await buster(paramsConfig);
    })();

    Filing Bugs And Feature Requests

    Changelog

    v1.2.1

    This release only encompasses changes to the project’s README.md file, specifically for the addition of the solicitation to ‘Buy me a coffee’.

    v1.1.1

    This release only encompasses changes to the project’s documentation in this README.md file.

    v1.1.0

    This release includes an improved hashing algorithm that generates unique hashes for all files, including those that have no content.

    v1.0.0

    This is the first major release of Buster and incorporates many breaking changes from prior versions. Most notably, prior versions had a “safe mode” configuration option that would instruct Buster to cache bust “in place”, meaning that it would not create backups and would not be able to restore files to their prior state. As it turns out, the vast majority of Buster’s users are using “safe mode” because it fits their workflow of generating their site into a dedicated folder that can be cache busted and that could easily be repopulated by just regenerating the site. These changes were implemented to refactor Buster to precisely match this typical workflow.

    v0.3.1

    This release addresses fixes for security warnings for packages used internally by Buster only. There are no changes to the code base.

    v0.3.0

    This release addresses one bug and fixes for security warnings for packages used internally by Buster only. Also landing with this release is reduced console output; use the verbose config option if needed.

    Major bug fixes:

    • Addresses issue #14 which could cause Buster to mangle hashed file names. Please note that beginning with this release, Buster now generates hashed file names as [hash]-[file name].[file extension]. You are strongly advised to upgrade your projects and rebuild them.

    v0.2.4

    This release addresses fixes for security warnings for packages used internally by Buster only. There are no changes to the code base.

    v0.2.3

    Major bug fixes:

    • Addresses issue #13 which would cause Buster to crash when reading a configuration file that doesn’t exist.

    • Addresses issue #12 which would cause Buster to crash when setting paramsConfig to a default value of {} to indicate that it wasn’t passed.

    v0.2.2

    This release includes no changes to the code base.

    • Addresses issue #11 which seeks to lockdown all project dependencies including descendants using NPM’s shrinkwrap.

    v0.2.1

    Major and minor bug fixes – includes but not limited to the following:

    • Addresses issue #10 which would cause Buster to fail when reading command line configuration data belonging to the application that launched it with paramsConfig.

    • Addresses issue #9 which would sometimes cause restore to fail. This fix totally replaces the one introduced in v0.2.0, and now handles the issue earlier in the restore processing cycle.

    v0.2.0

    Major refactor – includes but not limited to the following:

    • Introduces experimental “safe mode” feature, resolves #6.

    • v0.1.6 breaks handling of backup files bug, fixes #5.

    • Removes hashed files from the manifest returned by glob during restore.

    • Implements new resolution of destination paths.

    • Removes the “file-exists” package from the project.

    • Catching some async exceptions to prevent unresolved promise exceptions.

    • Configuration attempts to resolve from paramsConfig (i.e. passed via a script) first.

    • Updated README.md

    v0.1.6

    • Addresses a bug in command-line processing which would cause Buster to crash when the user enters only “bust” or “restore” from the command-line.

    • Addresses a bug in od processing which would cause Buster to crash when attempting to create folders that already exist.

    • Addresses a bug in od processing which would cause Buster to crash when attempting to delete files that no longer exist.

    Copyright And License

    Copyright © 2018, Jeffrey Schwartz. Released under the MIT license.

    Community

    For help, discussion about best practices, or any other conversation that would benefit from being searchable:

    Discuss Buster on Github

    For casual conversation with others about using Buster:

    Discuss Buster on Twitter and other social media.

    Show Your Appreciation


    Please 👀 watch and leave us a 🌟 star. 🙂

    Visit original content creator repository https://github.com/4awpawz/buster
  • tiobe-scraper

    TIOBE Scraper

    License: MIT

    A simple web scraper to fetch the latest programming language rankings from the TIOBE Index. The data is extracted using Bun, TypeScript, and Regular Expressions, then saved as JSON and YAML.

    Features

    • Fetches the latest programming language rankings from TIOBE
    • Extracts rank, name, percentage, and change in ranking
    • Saves results in JSON and YAML formats
    • Uses Bun for fast execution

    Installation

    Ensure you have Bun installed on your system:

    curl -fsSL https://bun.sh/install | bash

    Then, clone the repository and install dependencies:

    git clone https://github.com/BaseMax/tiobe-scraper.git
    cd tiobe-scraper
    bun install

    Usage

    Run the scraper with:

    bun run start

    or

    bun run scraper.ts

    Output

    The results will be saved as:

    • tiobe.json (structured JSON format)
    • tiobe.yaml (YAML format for easy readability)

    Example output:

    [
      {
        "rank": 1,
        "name": "Python",
        "percentage": 23.88,
        "change": "+8.72%"
      },
      {
        "rank": 2,
        "name": "C++",
        "percentage": 11.37,
        "change": "+0.84%"
      }
    ]

    License

    This project is licensed under the MIT License. See the LICENSE file for details.

    Idea

    The idea of this scraper comes from thecompez/tiobe-scraper project.

    Author

    Max Base (c) 2025

    Visit original content creator repository https://github.com/BaseMax/tiobe-scraper