A bigger picture

@aparcar, @mwarning -- Loving the download wizard, it does look very much LineageOS-like! I'd suggest displaying a "What's a snapshot?" explanation when Snapshot is selected from the left drop-down.

I also support @Ansuel's suggestion. I understand some people would prefer to turn off the "phone home for updates" feature, but it would be nice to have new-release notifications integrated into the SSH banner and WebUI. Maybe make it a separate package, because it would most likely require SSL support. I don't know enough about the backend, but the client side would probably be trivial.

We also need a news ticker on the OpenWrt home page to highlight packages or major changes to OpenWrt, like the jump from ar71xx to ath79. Or embed the OpenWrt Twitter feed.

There is a new announcement list here which contains updates. It will be promoted during the next release cycle.

Do I read this correctly, that this solution is meant to replace the complete toh and packages namespaces in the wiki?

The current packages/toh view could be a) replaced, as it seems fully auto-generated, or b) extended to a machine-readable format.

That does not include the wiki. I'm in favor of using more templates and having a single page per device instead of merging them, but that's another thing to discuss.

There are 2 different places to report issues:

I know that the former is for OpenWrt core and the latter is for LuCI+Packages.
But, this is still confusing.
Perhaps we should use a single place like GitHub for both?

A similar problem with Git repositories:

Some of the repositories are mirrored to GitHub, and some not.
Some use GitHub as a primary location, others don't.
This doesn't look intuitive at all.

I'm afraid I cannot follow...

  • packages namespace: script generated
    • replace?
    • keep as is?
    • extend to a machine-readable format?
  • toh namespace (devicepages + dataentries): nothing script generated
    • replace?
    • keep as is?
    • extend to a machine-readable format?

Keeping both (on top of your new system) would mean double work, because two systems need to be filled with data.

And seeing vgaetera's posting, I think it would make sense to clarify in the first posting what is in and out of scope for this change.

Hi @vgaetera, yes it's confusing and we're trying to find a better way. Some time ago a vote decided to move to a self-hosted GitLab. We've been working (slowly) on ways to migrate since then. There are also concerns that parts of the community would be cut off by removing the GitHub repositories, so it's very much a work in progress. A compromise between usability (as with GitHub) and independence (as with self-hosting) is not trivial.

Hi @tmomas, I've been mixing up the terms, which is confusing; I'll try to clear it up:

packages overview: I see the following additional requirements:

  • machine readable via a JSON API
  • support multiple releases (incl. snapshots)

If that's possible with the current wiki, great! If not, we could consider creating something similar to Alpine's package overview. Before taking any additional steps I'd like to hear @bobafetthotmail's opinion on that.

toh: I understand the toh as both the table of hardware and the techdata per device. That could be beautified and rendered from community-maintained YAML files, combined with information from whichever wikidevi mirror is up by that time.

device pages: The trickiest part. My suggestion would be to create a pretty table of hardware + techdata pages (see above) and remove such information from the current wiki. Only keep information on installation, PCB pictures, special tips and tricks, boot logs, etc.; everything that can't reasonably be put into a YAML file.
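
As an illustration, a community-maintained device YAML file might look something like this (all field names are hypothetical; no schema has been agreed on):

```yaml
# Hypothetical device metadata file for devices.git
# (illustrative only, not an agreed-upon schema)
device:
  vendor: TP-Link
  model: Archer C7
  variant: v2
  target: ath79/generic
  ram_mb: 128
  flash_mb: 16
  wlan:
    - 802.11n 2.4GHz
    - 802.11ac 5GHz
  ethernet_ports: 5
  usb_ports: 1
  wiki: https://openwrt.org/toh/tp-link/archer_c7
```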

The indexing and machine-generated pages for each package are all done by a shell script running on the wiki server that was originally designed to index multiple releases, so it can index snapshots too.

But I see you have your own Python script, and I'm assuming it is much better than my shell script, so it's probably better to migrate the system to use yours instead, although that's probably not a priority right now.

The harder part has always been actually showing the information with a decent table interface. So far we are using Dokuwiki's data table plugin, but it's less than optimal: it lags every time you open a package table page, and its interface is weird, with scroll bars only on the bottom while it cuts off content on the side. It's the same story as the ToH, really.

If you have a decent frontend page for that, like the Alpine page you linked, I'm ok with migrating to it.

Smells like you already have a useful application in mind?

Yes please! It's ffffreaking fast, which the current solution in the wiki isn't. Not at all.

New task: Create new packages table

  1. Create frontend for the packages JSON data
  2. Integrate new packages table into the wiki

Who can take care of #1?
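
A minimal sketch of what #1 could look like, assuming a hypothetical JSON structure for the package data (the field names `name`, `version`, `description` are invented for illustration):

```python
import json
from html import escape

# Hypothetical package index, as a JSON API endpoint might serve it
# (structure invented for illustration).
packages_json = """
[
  {"name": "luci", "version": "19.07.3-1", "description": "LuCI web interface"},
  {"name": "dnsmasq", "version": "2.80-16", "description": "DNS and DHCP server"}
]
"""

def render_table(raw):
    """Render the package list as a plain HTML table."""
    rows = []
    for pkg in json.loads(raw):
        cells = "".join(
            f"<td>{escape(pkg[key])}</td>" for key in ("name", "version", "description")
        )
        rows.append(f"<tr>{cells}</tr>")
    header = "<tr><th>Name</th><th>Version</th><th>Description</th></tr>"
    return f"<table>{header}{''.join(rows)}</table>"

print(render_table(packages_json))
```

A real frontend would of course fetch the JSON from the server and add sorting and filtering, but the point is that the view stays a thin layer over the data.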

Take a look at how LineageOS implements per-device documentation in Git using static files:

Finally I found some more time to go into the details.

I sometimes have difficulties understanding your argumentation.

Regarding this case, the solution is just one section below:

Click the link to the Table of Hardware, enter your model, and that's it. Perhaps filter further for a specific version.
Not that difficult.

Maybe we should pull the second section ahead, in order not to confuse users with the overwhelming downloads.openwrt.org?

Alternative way: Use the wiki search to find a devicepage (I just picked a random one that came to mind)

Easy enough, isn't it?

You see why I'm having difficulties with your argumentation?

As we have seen above, identification of the OpenWrt image to flash is not that hard.
But if you want to say "Wouldn't it be much nicer and easier, if I could just click a button in LuCI Upgrade to new OpenWrt release now?", then I'm fully with you.

Good point and a very good example of how it should not be.
Slight improvement: a separate page for the Archer C7, without the other devices. And still, a single page for a single device would be the clearest possible solution, no doubt.

Solution: Use the right tool for the job.

  • Use https://openwrt.org/toh/views/toh_fwdownload if you already know which device you are searching firmware images for. Why would you need the USB ports and Wifi standards and whatnot if all you want to do is search for a firmware image?
  • Use other views https://openwrt.org/toh/views/start if you are searching for information on which device to buy. Why waste precious space with download links when you don't even know which device to buy?

Assuming that you mean scrolling in the horizontal direction (left-right), not vertical:
Since the table size depends on the amount of data, any table with lots of columns will have that problem and require more or less horizontal scrolling, IMHO.

Regarding complex search masks: I agree, and I'd like to have filter capabilities like geizhals.de, backed by a database built for exactly this purpose (unlike the current data-plugin solution, which is general-purpose in nature and hence does not perform well with large amounts of data).

As I understand it, the current YAML information has been extracted from the ToH.
Where does the future YAML information come from? How exactly are those YAML files generated? Manually via text editor?

(Remark: The above in principle also describes the current situation in the wiki.)

What does that mean in terms of

  1. initial entering of information into YAML files
  2. continuous updates of any kind to YAML files?

What is your expectation regarding the work to be done by the community? And who is the community?

Reviewable by whom? Wouldn't the reviewers be "a small group of people with privileged access"? Isn't that a contradiction to what you said above?
Who are the reviewers? Do we have the people on board to do the job, and do those people have enough time? Just asking, because users often hear that there is not enough manpower available to do $this and $that. Adding more workload in this situation might be counterproductive.

Even in the current wiki we do not need to write the same instructions multiple times.
Examples:

All it needs is a link to these instructions placed on the devicepage.
If it is deemed too complicated for a user to click a link, there is also the possibility of including complete pages, or sections thereof, in other pages. The Linksys WRT AC pages (and others) make use of this. Write once, use many times.

Yes, it is clearly easy if you know the positions of the links and the overall flow. I think my main problem with the OpenWrt wiki is that it's very noisy. A lot of information is provided, a lot of links to click, a lot of sub-pages.

My intention here is that someone using OpenWrt for the first time isn't stopped by having to crawl through a complex website. The subject is complex, sure, but getting a stock device onto OpenWrt usually just requires the following steps:

  • Find your device in a list
  • Download the factory image
  • Upload it to your running router and wait 2 minutes
  • Connect to it
    • via WLAN: do this
    • via Ethernet: do that
  • Change Wifi Key
  • Enjoy

Sure, that doesn't cut it for the many, many people with complex setups, but literally all my non-OpenWrt acquaintances/friends are happy with that setup.

So I want to create/design/build a flow that brings you through the list in a pleasant way.

I think Freifunk is a good example of people having a similar motivation and therefore simplifying the tooling to get routers flashed.

So let's make it a policy to have one page per device?

To do that we need data easily accessible in a machine-readable format to put into a database. YAML maybe :slight_smile:?

Ideally the people maintaining the current data migrate to the YAML system. It could become a policy, as discussed in Hamburg, that developers adding a new device to openwrt.git also add a YAML file to devices.git containing metadata.

I'm in contact with the maintainer of the current wikidevi instance (clone); I'll ask if we can do some parsing to automatically create YAML files for devices.git.

We can also design a schema with a simple web interface to create such YAML files. I'm happy to learn about existing standards :grin:

It's a long shot, but the Lineage team has per-device maintainers. Ideally we'd have a community of router users who test new features on their devices and can give their approval. Candidates are people within this forum providing custom firmwares, or developers/companies interested in keeping their devices up to date. Say I like my Linksys WRT3200ACM; I'd test each RC and record in some machine-readable file that I approve the device as tested (or broken).
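
Such a machine-readable approval file could be as simple as this sketch (format and field names purely hypothetical):

```yaml
# Hypothetical per-device release approval record (illustrative only)
device: linksys_wrt3200acm
release: 19.07.0-rc1      # placeholder release identifier
tester: example-forum-user
status: tested            # one of: tested, broken, untested
notes: "Wifi and sysupgrade OK"
```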

So, the same people doing the testing could also keep an eye on the device descriptions.

I'm working on that in parallel :slight_smile:

Regarding the designed installation flow: say I want to run LineageOS on my Android phone. Go to lineageos.org

Find a very prominent Download button.

Next I see all supported devices on the left; say I have a Fairphone 2

One arrives at an overview which contains the firmware images. Now Lineage has a somewhat questionable "just update your device every week" approach, but we wouldn't have to apply that to OpenWrt, of course.

The prominent installation instructions then show what to do with the firmware image.

This manual looks the same for every device, varying only in key combinations and the occasional special note. The manual is extremely short and helps even a new user get things up and running quickly.

The "Device info" page again looks the same for each device, so it's easy to compare multiple devices. It also links to in-depth guides, which are, however, beyond the scope of many regular users.

So, I want that but with routers.

While on Dokuwiki it won't look nearly as good as that website, it is relatively easy to do.

It requires A LOT of man-hours if done by hand, since OpenWrt supports (or supported) A LOT of devices.

These kinds of pages lend themselves well to being auto-generated by a script that looks at the ToH's database to extract the relevant information.

This is technically possible on the OpenWrt wiki too; I did create some "summary" or "cheatsheet" pages that just fetch most of their content from other pages using the wiki's functionality (or a plugin, I don't remember). https://openwrt.org/docs/guide-user/network/ucicheatsheet

The main reason we decided against strong automation like that and went with templates instead is that we wanted to rely on contributors to add most of the information and/or keep it updated.

But I don't know how well that worked out or how much is still required, given that we now have installation instructions and most device data in the commit that added the device.

I think it should be OK to mostly automate it if we leave some places where people can just edit the page to add notes between the automatically generated blocks.

That's fine and all, but it's a lot of work, and I would really like to get something better for the ToH and package tables.

Devices

So I've got a scraper that works for the OpenWrt wiki; it could also work for wikidevi. Within some 10 minutes we'd have 1500 YAML files containing the current state of knowledge. Next we design something pretty, or ask our friends at LineageOS if we can recycle parts of their device-info templates. Within a week we'd have a page running containing 1500 devices. These YAML files should contain a wiki link to the single-page device wiki pages, which hold much more in-depth information. At this point we could remove the device hardware highlight tables and download links from the wiki entries.

Regarding maintenance: for new devices we can make it a requirement, so developers have to provide that. The kernel requires YAML specifications for device-tree bindings; I think we can do something similar. For existing devices, either the existing data maintainers learn YAML and slightly change their workflow (meaning using Git), or the scraper script runs every X hours.

Fresh information on built firmware is provided via JSON files generated directly by the buildbots (@jow ping), so we can have a firmware wizard or a Lineage-like download overview pointing to wiki pages and the hardware overview.
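
To sketch how such buildbot-generated JSON could feed a download wizard: the structure below is invented for illustration (the real files may differ), but the idea is to map a device profile to its image download URLs.

```python
import json

# Invented example of per-device firmware metadata as the buildbots
# might emit it (real JSON structure may differ).
profiles = json.loads("""
{
  "version": "SNAPSHOT",
  "profiles": {
    "tplink_archer-c7-v2": {
      "target": "ath79/generic",
      "images": [
        {"type": "factory", "name": "openwrt-ath79-generic-tplink_archer-c7-v2-squashfs-factory.bin"},
        {"type": "sysupgrade", "name": "openwrt-ath79-generic-tplink_archer-c7-v2-squashfs-sysupgrade.bin"}
      ]
    }
  }
}
""")

BASE = "https://downloads.openwrt.org/snapshots/targets"

def image_urls(data, profile, image_type):
    """Resolve download URLs for one device profile and image type."""
    entry = data["profiles"][profile]
    return [
        f"{BASE}/{entry['target']}/{img['name']}"
        for img in entry["images"]
        if img["type"] == image_type
    ]

print(image_urls(profiles, "tplink_archer-c7-v2", "factory"))
```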

At the same time, all YAML information is read into a database, allowing a search mask based on features. That's possibly doable via some Flask + Peewee scripting.
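
The post above suggests Flask + Peewee; as a dependency-free sketch of the same idea, here is a feature-based search over device data using only Python's stdlib sqlite3 (sample rows invented for illustration):

```python
import sqlite3

# Invented sample rows standing in for data parsed from the device YAML files.
DEVICES = [
    ("TP-Link Archer C7 v2", "ath79/generic", 128, 16, 1),
    ("Linksys WRT3200ACM", "mvebu/cortexa9", 512, 256, 2),
]

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE device (model TEXT, target TEXT, ram_mb INT, flash_mb INT, usb_ports INT)"
)
conn.executemany("INSERT INTO device VALUES (?, ?, ?, ?, ?)", DEVICES)

def search(min_ram=0, min_flash=0, min_usb=0):
    """Feature-based filter, as a web search mask could drive it."""
    rows = conn.execute(
        "SELECT model FROM device WHERE ram_mb >= ? AND flash_mb >= ? AND usb_ports >= ?",
        (min_ram, min_flash, min_usb),
    )
    return [model for (model,) in rows]

print(search(min_ram=256))
```

A Flask route would just translate query parameters into a call like `search(min_ram=256)` and render the result.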

Packages

What exactly do we want? Something like Alpine's solution seems very doable. Do we need any special bells and whistles?

I wasn't clear in my question before. I'm not a web developer; doing anything beyond basic static HTML with CSS styling is beyond me, and afaik this also applies to tmomas.

The main thing that I'm asking is: do we have someone or something that can generate these pages, not just the data backend but also the actual pages with the tables?
It seems the repos you linked do have a website structure; is that an actual website + backend ready to be hosted?

Because just hosting these new pages and the indexing backend scripts/databases on the same wiki server is doable, and probably much lighter than the current system for both the packages and hardware tables (which use MySQL as key-value storage and run like complete garbage as a result).
I'm not the owner so you probably need to ask jow or other team members for permission too.

While also possible, I'm really NOT a fan of using the wiki's own "embed HTML/PHP" feature (which is disabled, afaik) to serve the webpages from inside the wiki itself, as that is going to open doors to exploits and haxxoring. So I'd like to veto that if possible.

Packages
What exactly do we want? Something like Alpine's solution seems very doable. Do we need any special bells and whistles?

Afaik no, that's as good as it gets for the package table.

I would also like to get an equivalent of the "package indexes" https://openwrt.org/packages/index/start, which were inspired by Debian's package lists and can give a different view of things. https://packages.debian.org/stable/

But it really isn't a requirement, as long as the package table does not suck it's good for me.

Just wondering about the move to Gitlab - is there an actionable list somewhere of things that need to be done? If you don't know, GNOME's had a pretty nice looking self-hosted Gitlab for years now [1].

Currently the OpenWrt project is spread across GitHub, git.openwrt.org, and Flyspray, not to mention IRC, the forum, the wiki and the mailing lists (but I am not saying the latter four being separate is necessarily an issue).

Issues with the current set of tools:

  • Github: no self host option
  • git.openwrt.org: fine for basic hosting, but lacking a lot of features
  • flyspray: lacking a lot of features; for instance, it doesn't have code highlighting in patches and can't easily be referenced from related pull requests

The biggest advantage of a move to self hosted Gitlab is consolidating all the issue tracking, CI, and hosting in one place, making it easier to contribute to the project, just like it did for GNOME, instead of logging in to three or more different sites that are all built with different technologies, UI's, have different latency when accessing them - and that's just for the user. The other side of it is the question of backups, administration etc.

And of course it is all easier said than done, but I just wanted to point out that it looks like Gitlab will actually help an org like OpenWrt migrate to any Gitlab instance [2]

  1. https://about.gitlab.com/blog/2020/09/08/gnome-follow-up/
  2. https://gitlab.com/gitlab-org/gitlab-foss/-/issues/40541
In about an hour I was able to complete a dry run migration to GitLab.com (not self-hosted) for my forks of the OpenWrt core, LuCI and packages repositories and so far it seems to have pulled the code over painlessly (no big surprise). However, I cannot test import of issues, merge requests, and all that other stuff, because I am only copying my fork over, not the real repository. A repository owner will need to test those features. But, according to GitLab, all of that is supported.

On the other hand, what I can report that is at least semi-useful is that it looks like the self-hosted GitLab will need GitHub integration enabled, because that is not enabled by default in self-hosted installations. This feature allows login with GitHub.

The second point is that it may be a good idea to ask people to create their GitLab account ahead of the migration, if they want to be sure their profiles are matched with their new GitLab account.

Scrolling through the commit log, it appears that a lot of contributors already have a GitLab account, either by creating one or by logging into GitLab with GitHub at some point in time, because many authors look the same as they do on GitHub (name, avatar). Here's how that works:

Use the GitHub integration

Before you begin, ensure that any GitHub users who you want to map to GitLab users have either:

  • A GitLab account that has logged in using the Login with GitHub icon - or -
  • A GitLab account with an email address that matches the publicly visible email address in the profile of the GitHub user

User-matching attempts occur in that order, and if a user is not identified either way, the activity is associated with the user account that is performing the import.

If you are using a self-managed GitLab instance or if you are importing from GitHub Enterprise, this process requires that you have configured GitHub integration.