J-Core, RISC-V and Open Source GPS - A conversation with D. Jeff Dionne

The video and transcript below are the first exclusive for riscvnews.com - an interview with embedded chip designer Jeff Dionne.

For many years Jeff has spearheaded development of J-Core, an open source processor design. In our interview we try to look at where J-Core and RISC-V differ and what opportunities both projects might have to learn from each other.

If you’re using an old-school audio player, we also have a plain MP3 file of the interview.

If you would like to discuss this you can do so in this thread on Hacker News.

Kirin V.: Today we’re having a conversation with Jeff Dionne. Jeff is a chip and embedded systems designer for coresemi.io and has an extensive history of contributions to the open source ecosystem - including the initial development of μClinux. Jeff - Thanks for joining us at riscvnews.com. First of all, could you talk a little bit about your background and what brought you into the world of open source hardware?

D. Jeff Dionne: Yeah. So I’m not really an open source programmer guy - people think I am, but actually I’m a hardware guy. From very early on, you know, 15 years old or whatever, I did hardware for various different companies and projects, both analog and digital. So that’s basically my background.

Kirin V.: What kind of analog circuits would you work on back in the day?

D. Jeff Dionne: I like to do very high resolution analog stuff. One of the projects that I did way back in my teens, in the late eighties/early nineties, was multi-kilowatt amplifiers, very large amplifiers. I think the largest one I did was 100 kilowatts. Those are the kinds of things that I started off doing. And then we needed controls for that, and so I started having to do digital circuits. Way back in the day you had to build the entire system yourself, and so that’s the reason I got into digital logic and computers and that kind of thing. And of course, you become a programmer because you gotta program your state machines, because that’s what a computer is - a giant state machine. One of the things I did after that was reverse engineer the chipset for the IBM XT. I found out later that the reverse engineering I did for that chipset ended up in some of the first early Taiwanese clones of the IBM PC architecture.

Kirin V.: Oh, that’s exciting.

D. Jeff Dionne: After that I was doing embedded hardware for instrumentation in the power industry. I did a lot of test equipment for things like relay test sets, which exercise the relays that protect transmission lines. At that time we were using Hitachi 6303 processors. And that’s sort of the first time that I started thinking about FPGAs and embedded hardware.

Kirin V.: The hardware that you just mentioned - Was that an FPGA?

D. Jeff Dionne: Eventually it started to have some FPGAs, but that was way before FPGAs were capable of doing the kinds of things that they are now. I think it was probably something like 1995 when I realized that if you wait long enough, you’re gonna be able to do soft cores that run in an FPGA - and it eventually happened, of course. The last time I looked at it when it wasn’t ready was sort of the early 2000s, and it just ticked over to being possible to do this kind of work in an FPGA around 2009, at which point I said, “Okay, we’re ready to go.” So let’s look at the power industry again, and let’s do a complete system on chip in an FPGA as a prototype. And that’s where we are today - we came from there.

Kirin V.: Yeah, it certainly seems like FPGAs have come a long way in the last 10 years or so. However it seems that as more of the providers merge with larger companies, we’re at risk that at some point this work will be seen as not as profitable as the ASIC work of the big manufacturers. Do you think there’s a risk that we’ll see a plateauing of capability FPGA-wise?

D. Jeff Dionne: I think that we’ve basically already seen that, in a different way than you might be thinking. It’s never been profitable for the FPGA companies to support their platform as merely a prototyping platform, because that’s what we look at it as, right? I think the Linux community thinks that J-Core is both a hobby project and an FPGA-only project.

But keep in mind that we built J-Core as the CPU that was going to go into the chips that we use for instrumentation. That’s what it’s for. It’s not an FPGA core, only the prototypes are in FPGA. It’s an ASIC core, and the FPGA company has basically never made a dime off of us, if you want to put it that way. We didn’t sell a lot of those units. I was reading an article today; TSMC just announced overnight here in Japan that in Germany they’re gonna build a fab [for] 28 nanometer, 16 nanometer, 40,000 wafer starts per month, opening in 2027. Okay, 40,000 wafer starts per month at a reasonable size. How many chips is that per year? About 1 billion chips a year. So if you’re doing that kind of an investment, how many FPGAs do you need to sell in order to pay back that investment, or in order to be more than a rounding error? And that’s why, what we find is, the open source tools like for instance Yosys or… we don’t use Verilog, it’s a horrible language. So, you know, GHDL.

Kirin V.: Sorry, did you say GHDL or VHDL?

D. Jeff Dionne: We write all our cores in VHDL. And the open source implementation of that is GHDL.

Kirin V.: Understood, thanks.

D. Jeff Dionne: Yeah, so those tools are not supported like you would think they would be by the FPGA vendors or the fabs. And the reason is because any impact that they have is a rounding error. So getting back to your original question: yeah, it’s kind of difficult to see how a community forms around a platform - rather than a tool set - that has legs for very long. You’ll continuously have to change which FPGA platform you’re using and what tools you use. And that’s kind of built in, it’s the nature of the beast. It’s not like Linux, where suddenly an FPGA CPU core, RISC-V or something, is gonna take over everybody’s design space, and there’ll be a few winners and a couple of losers and this is the majority of the market. The economics just aren’t there.
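
(Editor’s note: Jeff’s billion-chips figure earlier holds up as a back-of-the-envelope estimate. Here is a minimal sketch of the arithmetic; the die size is my assumption, since Jeff only says “a reasonable size”.)

```python
import math

# Sanity check: "40,000 wafer starts per month" ~= 1 billion chips/year.
wafer_starts_per_month = 40_000
wafer_diameter_mm = 300          # standard wafer size at 28/16 nm
die_area_mm2 = 30.0              # ASSUMPTION: a small embedded SoC

wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
# Classic gross-die-per-wafer estimate: area ratio minus edge-loss term.
dies_per_wafer = (wafer_area / die_area_mm2
                  - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

chips_per_year = wafer_starts_per_month * 12 * dies_per_wafer
print(f"{chips_per_year / 1e9:.1f} billion dies/year")  # ~1.1, before yield
```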

Kirin V.: Yeah, that makes sense, and it sheds some light on the underlying dynamics of why that’s happening. Thanks for that insight. So you’re based out of Japan at the moment. Is that right?

D. Jeff Dionne: That’s right. Our companies are Smart Energy Instruments and Core Semiconductor. Smart Energy Instruments is a company out of Canada - was a company out of Canada. And Core Semi, our headquarters is in Delaware. For both companies the engineering teams are here in Japan, so I’m here in Japan.

Kirin V.: Well, certainly historically, Japan has a great engineering culture, especially around low-level embedded chips and manufacturing. And there is a heritage there - so you can see why that might happen.

D. Jeff Dionne: Yeah, for sure, a lot of engineering prowess, not necessarily software development prowess. It was interesting - I was reading an article today, somebody in my LinkedIn feed saying that Japan has really lost its way because it failed to make the transition to software. That’s just a complete misunderstanding of how the economy here in Japan works, and basically how the technology economy works in the world. What do you think software runs on, my friend? It comes from somewhere!

Kirin V.: You make a very good point. I’ve thought a little bit about this myself, and the fact that in the last few years software developers, and the role they play in society, have come to the forefront a lot more than hardware engineering has. I think - and I’d be curious what your thoughts are here as well - it might be partly because, as a point of leverage, you get a lot more out of how flexible everything is at the software layer. And even though hardware engineering is probably a more difficult job, and requires more ramp-up from a training, education and hands-on experience point of view, it’s just not seen the same way.

D. Jeff Dionne: Yeah, I think that’s right. So when I started with one of our first companies - at the time it was called Arcturus Networks, or Rt-Control - we used to say that we want to make the platform disappear, right? So you would never know what hardware is underneath it. You don’t care, and you don’t want to have to care. And today I think that that’s actually happened to such an extent that people don’t even realize what it is they’re talking about. When you go out and talk to even people who consider themselves to be technology experts, and tell them “We’re doing a hardware VPN”, or “We’re doing an open phone platform”, or “We’re doing an open tablet platform”, or whatever it is that we happen to be working on at the time, they’ll say silly things like, “Why are you doing any of that? We have software now.” What do you think runs on your phone? “I have a phone. I don’t need chips.” There are chips in your phone! You realize your phone is made of chips. There are supposedly technology-savvy people who think that we have moved on from hardware, and that’s a stunning thing for me. And it seems to really have caught on in the popular opinion of a lot of folks.

Kirin V.: They’re taking the phrase move to the cloud a little bit too literally.

D. Jeff Dionne: A little bit too literally, a little bit too literally. And at the same time, I’ve gotten used to people calling me, you know, a software guy or an IT guy, when in actual fact I think about programs, as I said, as part of a state machine and a piece of hardware.

Kirin V.: Mmmm, yeah. So just to focus a little bit more on J-Core. I listened to a lecture you gave at COSCUP 2023 in Taiwan recently, and you very briefly talked about RISC-V, and one of the things you said - which I thought was interesting - was “we were here first!”. What drove the early development of J-Core when you were reviewing what was available at the time? I assume that RISC-V wasn’t really on your radar, because it was too nascent?

D. Jeff Dionne: RISC-V didn’t exist at the time. RISC-V came along well after J-Core. It may have been an academic project still, but it wasn’t something you could get your hands on and do anything with. So we’re talking about, you know, 2008/2009 now, when we were doing the engineering analysis for what processor to put into the chipset we were pitching for that company - because the company hadn’t been funded. And you have to know what path you’re gonna take - clearly not going to license an ARM core, where the whole thing is closed and you don’t know what’s in there. So alright, let’s sit down and do an analysis. At the time the choices were really: do we do a 68000 from scratch? Do we continue working with Hitachi products, and do an open source SuperH? Do we do a MIPS? And my initial reaction was, hey, let’s do a MIPS.

And we did an analysis on instruction density, because it became clear from the use case that memory bandwidth was going to be a huge problem for this design. It wasn’t a compute platform, as I mentioned in the talk in Taiwan - it was an instrumentation platform that had 24 streams running at 12 and a half megasamples per second into it.

So 300 megasamples per second of sample data coming in from time synchronized sensors, and it’s got to stream DMA into the memory that the CPU core is connected to. And you still need to be able to run programs reasonably fast and all of that stuff. And so it turns out, when you look at the claims that are made by the RISC-V crowd - you know, the folks that made SPARC and MIPS all come out of the University of California system, and their argument has always been that a simpler instruction set leads to a better implementation that can clock faster, is easier to maintain, is easier to build quickly. All of those good things that you really want. Easier to validate and verify.
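
(Editor’s note: 24 streams × 12.5 Msps is where the 300 megasamples per second comes from. A minimal sketch of the resulting DMA load follows; the sample width is my assumption, Jeff doesn’t state it.)

```python
# Aggregate sensor-to-memory bandwidth, from Jeff's figures.
streams = 24                 # time-synchronized sensor streams
msps_per_stream = 12.5       # megasamples per second, per stream
sample_bytes = 2             # ASSUMPTION: 16-bit samples (not stated)

total_msps = streams * msps_per_stream            # 300 Msps aggregate
dma_bytes_per_s = total_msps * 1e6 * sample_bytes
print(f"{total_msps:.0f} Msps -> {dma_bytes_per_s / 1e6:.0f} MB/s sustained DMA")
```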

Well, from a performance perspective, it doesn’t really work out that way. And so we came across the SuperH architecture in a previous project, and we benchmarked it against everything else, and it was just so much better from the perspective of how much memory the instruction stream actually consumes - you know, what bandwidth of the memory does it actually consume? Your eyes just pop out of your head. You look at this thing and it’s like there’s just no other choice, right? And it’s a combination of two things. It’s a combination of the smaller width of the instructions - okay, you get a 2 times speedup there, because the instructions are half the width: they’re 16 bits wide and they’re fixed length. But it’s also that the instruction stream has been optimized to match what the compiler will emit. So maybe it’s 1.5 times denser from an instruction-to-instruction perspective than you would get with a strictly RISC design. And so the SuperH took a complex instruction decoder and bolted it onto the front of a fairly traditional looking 5 stage RISC engine. And you get the win from both sides. You’ve got this really complicated instruction decode process, but once you’ve done it, you’re fine, you’re good to go. And the actual back end of the processor, the instruction pipeline, is really simple. It’s a RISC design. So this was, you know, kind of a revelation. Okay, take a deep breath, we’re gonna do this thing. And it also turns out that there are other companies who came to that conclusion as well.
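
(Editor’s note: those two factors compound. A toy calculation, with an arbitrary workload size, of how they combine into roughly 3× less instruction-fetch traffic:)

```python
# How Jeff's two density factors compound for the same workload.
risc_instructions = 1000                    # arbitrary illustrative task size
risc_fetch_bytes = risc_instructions * 4    # 32-bit fixed-width RISC

sh_instructions = risc_instructions / 1.5   # denser encoding: ~1.5x fewer ops
sh_fetch_bytes = sh_instructions * 2        # 16-bit fixed-width SuperH

print(risc_fetch_bytes / sh_fetch_bytes)    # 3.0x less fetch bandwidth
```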

Here in Japan the SuperH architecture was in basically every phone from 1990 until, I don’t know, the 2004 kind of timeframe. So we’re talking about millions or billions of devices. It’s in every engine controller in the large German cars in the same time period. So it’s one of the most widely deployed instruction set architectures in the world - again, below the radar where nobody can see it.

Kirin V.: Yeah, it certainly has that pedigree. But as you say, it’s under the radar. So what do you think generated the excitement for RISC-V when J-Core has not been picked up? From what I’ve seen, it seems as though you’ve done a lot of the hard yards to develop this and make it something that could be implemented by a number of different organizations if they wanted to. And you’ve put this out there and it’s kind of been passed over/ignored. Why do you think RISC-V has captured the imagination where J-Core hasn’t, even though it has those technical advantages in the memory bandwidth space?

D. Jeff Dionne: Also in the compiler optimization space and all that. I think there are two reasons. One is because the team out of Berkeley did a very, very good job at creating a community, and the J-Core team is mostly focused on building real product with our technology. And I’m not saying that these things are incompatible. What I’m saying is, you have to deploy the resources you have in the place where it makes the most sense for you to do that. And the other reason, I think, is that complex instruction decoder means that if you want a validatable, completely correct SuperH implementation, you’ve got a long road to travel in order to get there.

Kirin V.: And that can be very important for a number of industries.

D. Jeff Dionne: Very important, right. And so we did that. And the team at Hitachi did that, and the team at STMicroelectronics did that, but other people consider that to be somewhat of a daunting task, I think. And I really don’t think that’s a negative. At the end of the day, we as Core Semi will deploy more resources to try and bring that capability to the open source community, and maybe we’ll be successful at it. But if we’re looking at what we’ve learned over the past, you know, several years - I think the Berkeley guys have had quite a successful campaign building the community, and we’ve not matched that by any stretch.

Kirin V.: That validation piece you’re talking about - Is there like a template for validation that’s been shared as part of the work you’ve done and published openly? Or is that something that’s specific to your companies and the implementations you’ve made?

D. Jeff Dionne: Yeah, it’s open. It’s in the repository. It’s just not called out like the compliance suite for RISC-V is. So you can get it, but it’s just kind of a part of the build process, right? As opposed to a tool that is designed to interact with whatever design you’re working on.

Kirin V.: Yeah, 100%. You were talking before about community, and the hub that the RISC-V folks made through their Berkeley work. What do you see as the hub of the J-Core community? Because I’ve done a bit of poking around and I couldn’t point to any one place and say: this is the hub, this is the place that people gather to discuss their work on J-Core.

D. Jeff Dionne: Yeah, I think that’s right, too. We used to have a mailing list where we tried to do that, and stewardship of the mailing list is awfully difficult. What we found is that there were a lot of folks with many opinions about how the platform should evolve and what should be in the instruction set, and that’s not the approach that we wanted to take, so we didn’t manage to coalesce a community. On the other side, the RISC-V folks seem to have found a way to do that by fostering discussion around extensions. So there’s now a proliferation of extensions for RISC-V, which we consider to be a really serious problem. Fragmentation of the instruction set is gonna lead to a lot of disappointment when people get this or that or the other, and try to figure out what compiler switches to turn on and off.

Kirin V.: Well, we’re already seeing that in RISC-V based chips to a certain extent, around some of the boards that are coming out that don’t have capabilities that particular applications deem required.

D. Jeff Dionne: That’s right. And so we didn’t want to go down that path, because we assumed that that would be the natural outcome, and unfortunately we were right about that. And I think that’s a challenge that the other community will manage to overcome eventually. I was talking to someone who proposed some extensions to the RISC-V architecture yesterday, and his comment was: now one of the main important things is to somehow or another come up with subsets of these extensions that make sense and are the recommended extensions, so that there may be fragmentation, but at least it makes sense, right? And you can get a subset of that in certain classes of products. And, you know, the rule of least surprise, right? Does it violate that? So I think that’s a positive thing. On the other hand, from the point of view of J-Core - the SuperH patents, the SuperH initial research, are very, very well done, and if you ask the question, well, why wasn’t there an instruction or an extension to do this? There’s a reason for that, right? And some of our folks who were at Hitachi have those documents, and we can answer that question. And as a result of there being answers, people maybe didn’t feel like the community was open. Because, as I said, we were trying to achieve something and present a complete whole.

Kirin V.: Yeah, I can see why that would happen. Because you don’t need to have this open discussion about the decisions - that discussion already happened many, many years ago.

D. Jeff Dionne: That’s exactly right. And so that wasn’t necessarily good from a community perspective. And I think that was a limiting factor for us, and I’m not sure how I would change how we handled that. But if there’s something to learn, it would be how to draw people into the project and the architecture while, you know - not maintaining its purity, that’s not what I mean - while maintaining its technical competence. You know what I’m saying: as opposed to a proliferation of either unnecessary or conflicting extensions that may lead to a confusing and incoherent platform.

Kirin V.: Yeah, you mentioned patents, and I know from the talk Rob Landley gave many years ago, it seemed like part of the decision around J-Core and the use of the SuperH architecture was the expiry of patents. It wasn’t just the technical superiority. It was also - and I think this is partly because of Rob’s history with litigation in general - you could point to specific implementation details and go: okay, well, we’re doing this exactly this way, and this has already been patented, and these patents have expired. You’re kind of preemptively worrying about that legal aspect of it. Did that come into your head as you were coming to these decisions?

D. Jeff Dionne: It was one of the major decisions that we made early on, looking to not only make an open source community, but to make something that you could actually fab and build chips with that people bought. So it was a major design decision: whatever we do needs to be patent clean. And you know, in 2017 we spoke with the guys at Berkeley about their decisions, and they said, well, look, the decisions around the instruction set architecture don’t matter that much, just so long as you can prove that the architecture doesn’t have any patent violations. And that’s one of the reasons why the new construction of RISC-V looks the way it does. We did that research also. And I said, that’s good; we took this lock, stock, and barrel, because this particular architecture had its first commercial shipments in this year - whatever that year was, I can’t remember - and the patents that were filed on it were several years before. So the entire thing as a whole, combined together - because some patents are just about combining things, right? All of that stuff happened, and the expiry of this intellectual property quagmire was this particular year, and we can say that definitively. Nobody is going to do anything at all, and we can prove that. And I think, you know, it’s kind of like a mutual congratulations, right? Like, yes, we both realized that was the case, and our solutions to it were slightly different. The Berkeley side looked at all of the different prior art individually, and we looked at how an instruction set architecture that shipped stands as prior art, and the patents that were around it - what were the dates on those? And then, okay, this coherent thing is ready to go as of this particular date. And we continue to track the patents that, let’s face it, some of our colleagues filed - when those things are expiring, and when they did expire, yeah.

Kirin V.: Yeah, yeah. I’d like to talk about the current evolution of the J-Core roadmap. So you’ve gone J1, J2, J0 - have I got that order right?

D. Jeff Dionne: Yeah. Basically, we started off with an SH2 compatible CPU core for a very simple microcontroller, and we expanded that to SMP. The original SuperH architecture didn’t have a good compare-and-swap instruction to implement that kind of algorithm for an operating system; it had something called test-and-set. You need this for locking, right? And so we added that to J2, and that became J2 SMP, so we can do a very efficient implementation running Linux on top of a symmetric multiprocessor with J2. We added an MMU to that - that looks like a J3 or an SH3, but we call it J2 MMU, or J32. And going the other direction, we started taking things out to let us make very, very small chips. There are a lot of requirements for chips that go in tiny little things that sort of disappear, and you can’t afford these large memories, even with process technology scaling down. It still really matters whether your code space is, let’s face it, 8,192 bytes or 8,193 bytes. You go over the 8K barrier by one byte and you’ve got to start looking for something to cut out of the program, or you need to look for a larger memory, right? And so that’s why J1 exists. It’s designed for tiny, tiny devices, where code efficiency is important but the processor also needs to be really, really small. The current implementation of J1, for instance, is about 43,000 gates in an ASIC process - 130 nanometer SkyWater, which is what the open source community really has access to through Google. Excellent program. Please keep funding that, Google!
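
(Editor’s note: for readers unfamiliar with the two locking primitives Jeff contrasts, here is a minimal sketch of their semantics. This is illustrative Python, not atomic code - on real silicon each routine is a single uninterruptible instruction, and the register conventions are simplified.)

```python
# Illustrative semantics only - on hardware these are single atomic instructions.

def test_and_set(mem, addr):
    """SuperH-style TAS: test whether a byte is zero, then set its high bit.
    Returns True (lock acquired) if the byte was zero. Enough for a spinlock."""
    was_free = mem[addr] == 0
    mem[addr] |= 0x80
    return was_free

def compare_and_swap(mem, addr, expected, new):
    """CAS, the primitive J2 SMP adds: store `new` only if the current value
    equals `expected`. Returns the old value so the caller can tell if it won.
    This is what efficient SMP kernels and lock-free algorithms build on."""
    old = mem[addr]
    if old == expected:
        mem[addr] = new
    return old

def spin_lock(mem, addr):
    # Spin until we are the one who flips the lock byte from free to taken.
    while not test_and_set(mem, addr):
        pass
```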

Kirin V.: I was going to say; that’s how you managed to get your ASICs done - through the SkyWater program?

D. Jeff Dionne: That’s how we managed to get our open ASICs done. We do our commercial ASICs through TSMC. So yeah, the open source side, we do that through SkyWater. But the chips we do for our company, they don’t go through SkyWater, because that’s not a mature process yet.

Kirin V.: The chips that come out of the SkyWater production run - what is the target for those? What do they end up being used in?

D. Jeff Dionne: Yeah. So right now, that’s silicon proving for the J-Core open source work and the Yosys/GHDL/OpenLane tool flow. That’s what we’re using those for. We haven’t actually done a chipset that’s being used for something through there. We just don’t think it’s quite ready yet.

Kirin V.: Referencing again that talk you gave recently in Taiwan - you mentioned that the layout didn’t look ideal, and that was because of the open source tooling, if I understand correctly?

D. Jeff Dionne: Yeah, so it’s kind of really interesting, right? Like, you start to go down this path and, as we talked about before, the expertise needed to do this kind of thing is not something [where] there are so many people with an itch to scratch that you’ve got a large talent pool to draw from. So when you look at something like OpenLane / OpenROAD, the number of contributors that are actually writing the code is quite small, and a lot of them have come out of the university environment, funded by DARPA. Some of them are funded by the Google programs that have been interacting with Sky130. And so the number of folks involved is not large enough for that project to actually have the critical mass that something like Linux got. And so there are holes in there that are difficult to patch, right? Like from the perspective of timing closure, which is what I was, you know, complaining about. I’m good at complaining; a lot of open source people are good at complaining. You can place and route a CPU core using the OpenLane tools - we patch them, we bring the netlist in from GHDL, it goes through the router, and you get a hard macro out the other side. What you don’t get is all the Liberty files, the timing files, so that you can use that macro and build a larger chip out of it. Okay, well, we kind of do that by hand and put a bunch of blocks together. But now you don’t get timing closure across the entire chip.

Kirin V.: Can you explain what timing closure means?

D. Jeff Dionne: Timing closure means: the entire chip is clocked some way. There’s a clock tree - you have a pin that takes the clock in, and then it’s distributed to the rest of the blocks in the chip. You want that timing right; you want the clock to appear at the clock inputs to all the registers in the design at a known time. So you need to track all the delays in the clock tree. And you want the data coming to those registers to appear before the clock, and not appear after the clock. And so you need to track each and every track inside the chip, and you need to extract its resistance and capacitance and calculate how long the delay is, so that you are 100% absolutely certain that when the clock appears at that pin on the clock tree - which is, you know, however many micrometers away from the clock pin and has gone through however many buffers - the data is ready for that latch to capture it. And unfortunately it also needs to be there for long enough afterwards, so that the latch can keep it. Otherwise it doesn’t remember, right? It latches the new value instead. So you’ve got these two things: you’ve got setup time, and you’ve got hold time, from the edge not of the clock that came into the chip, but of the clock when it appeared at the flip-flop, okay? And so the problem is that the tools need to be sophisticated enough to track all these edges - the edges of the data when the data changes, and the edges of the clock all the way through the clock tree - so that each and every place where this needs to be correct, it is correct. And the open source tools don’t get that right at this time. They do for a single block of RTL code, like your CPU design; it will probably, with pretty good certainty, get it right within one block. But as soon as you replicate your CPU onto the die area and then you connect it to the RAM, it all goes away at that point, and it can’t trace inside the CPU core to make sure that the data is going to appear at the input to the address buffer on the RAM at the right clock edge. And the result is just that the thing doesn’t work at all, and there’s nothing you can do. Because if you slow it down, maybe you fix the setup time - okay, fine, the data appeared. But maybe the signal path for that data is faster than the clock tree, in which case it latches the next value, not the previous one - not the one you’re trying to latch. So this unfortunately resulted in the first few multi-project wafer (MPW) tape-outs that Google did not functioning at all. And not only the designs that were submitted by the community, but also, internally, their RISC-V design that was supposed to be the supervisor for the chip - it wouldn’t run at all as a result of this. So that took a year to sort out, and our opinion is there are still a lot of things to do, and we’re kind of happy to help. But again, our interaction with the community is limited by our main focus, which is, you know, building things that people use, right? And we would love to be properly funded and spend our time making all of this open source tooling do the things that we need. And we’re working on that.
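
(Editor’s note: the setup and hold constraints Jeff describes reduce to two inequalities per register-to-register path. Here is a minimal sketch of the check a static timing analyzer performs - the names and numbers are illustrative, not taken from any particular tool.)

```python
def path_meets_timing(t_clk_to_q, t_logic_max, t_logic_min,
                      t_setup, t_hold, clock_period, clock_skew):
    """One register-to-register timing check, all times in nanoseconds.

    clock_skew = (clock arrival at the capturing flop) - (arrival at the
    launching flop), i.e. the clock-tree delay difference Jeff describes.
    Real STA also models process corners and on-chip variation; this is
    just the core idea.
    """
    # Setup: the slowest data must arrive before the *next* capture edge.
    setup_slack = (clock_period + clock_skew) - (t_clk_to_q + t_logic_max + t_setup)

    # Hold: the fastest data must not race through and overwrite the value
    # being captured on the *same* edge (the "latches the next value" failure).
    hold_slack = (t_clk_to_q + t_logic_min) - (t_hold + clock_skew)

    return setup_slack >= 0 and hold_slack >= 0

# Example: meets timing at 100 MHz, but raise clock_skew past ~0.7 ns and
# the hold check fails - and slowing the clock down would not fix that.
print(path_meets_timing(t_clk_to_q=0.3, t_logic_max=7.0, t_logic_min=0.5,
                        t_setup=0.2, t_hold=0.1, clock_period=10.0,
                        clock_skew=0.4))   # True
```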

Kirin V.: Well, that sounds like a great place to leave it, unless there’s something you’d like to say, or anything you’d like to add?

D. Jeff Dionne: No, I appreciate the opportunity to talk about all these things; it’s an exciting time. Actually, everything is just now coming to a place where you can do real, practical, interesting things. An example, as I mentioned in the talk in Taiwan, is the iCE40 toolchain: nextpnr, Yosys, iCE40 FPGAs from Lattice (used to be SiliconBlue) - those guys were great. You can now put a J-Core in there - or okay, you can put a RISC-V in there too, if you really want - and there’s enough memory on board that you can run a real, serious program, and you don’t need anything else at all. Well, you need a power supply, but that can be done, right? And these chips are cheap enough that you can actually build a fair volume of devices for a really cheap price and do something serious, not just blink an LED. You can do real things with that. Like, we are in the process right now of building a customer product, and it has a J-Core on an iCE40 FPGA, with a SerDes (serializer/deserializer) talking to analog-to-digital converters and doing some heavy lifting for battery management. This kind of thing is now possible. On the other side, the open source tooling is gonna get to this point - we’re gonna make sure of it. Like Google: I’m sure they’re gonna find a way to help do that. Efabless is gonna find a way to do that. IHP, with the new open process PDK that they’re releasing - I think that’s gonna be a big part, too. Even GlobalFoundries is getting in on it. In the next couple of years we’re gonna see suddenly a change in this industry, where if you want to do a design, you don’t immediately think: okay, well, I’ve got to pony up the money for an ARM license, and then I’m gonna hire a design team that’s done it before. You’re gonna do it on your desktop. You’re gonna prototype it in an FPGA - provided we can figure out how to keep the FPGA companies happy, right? And you’re gonna tape out in an open process where everything is safe, secure, and transparent to you. This is the time to watch this stuff.

Kirin V.: Alright. Well, thanks very much, Jeff. I appreciate your time. And good luck with all of the hard problems.

D. Jeff Dionne: Yeah, much appreciated. Thanks again.


(Shortly after my conversation with Jeff, I asked the following questions via email that I had missed during our talk.)

Kirin V.: Your focus on shipping embedded products means your company probably doesn’t have a burning need for multi-issue and the complexity that is introduced by J4. Do you expect to see development of the J4 (or J64) in the next few years?

D. Jeff Dionne: Yes, we’re thinking about the roadmap. We will certainly do a performant version, and the discussions are ongoing. There is not really a good proposal yet for what a 64-bit version should be, and you’ll note that the SH5 support (which we internally don’t think was an SH at all) has just been removed from the Linux kernel. Multi-issue and even out-of-order are probably more near term. The FPU is just about complete; that will be first.

Kirin V.: Can your open GPS implementation be used by amateur balloonists and hobbyists who need a solution that continues to work at high altitude? (for example see here: http://ukhas.org.uk/doku.php?id=guides:gps_modules)

D. Jeff Dionne: Yes, the GPS engine RTL is basically good to go. The RTL has been there for quite a while (https://github.com/j-core/gnss-baseband), but the software to drive it is entangled with other things. We need to take some time and make a clean release of this; it is perhaps not surprising that there has been so much interest in a fully open GNSS stack… we needed one, after all. When we can get to making a full open release maybe depends on having someone who might actually use it. So do make any introductions if you think it will be useful to them.

Please email enquiries@riscvnews.com if you’d like to get in touch.